Regulators around the globe in a race to curb AI risks


Financial regulators are starting to pay attention to artificial intelligence, with the U.S. Treasury Department and the Securities and Exchange Commission (SEC) expressing concern about how finance executives are incorporating AI systems into their operations. This comes as tools like OpenAI’s ChatGPT and Google’s Bard gain popularity, with their makers marketing advanced paid products to corporations.


For the first time, Treasury Secretary Janet Yellen identified AI as a risk in the annual report of the Financial Stability Oversight Council (FSOC), which she chairs. “The Council has identified the use of artificial intelligence in financial services as a vulnerability in the financial system,” she said. “The Council will monitor technologies to respond to emerging risks. Support for responsible innovation in the field can enable the financial system to reap benefits, but there are also principles and rules for managing risks that must be implemented.”

Traders on the New York Stock Exchange. (Credit: EPA/Justin Lane)


An ‘inevitable’ financial crisis


In the FSOC report, AI is listed as one of 14 risks to market stability. “The Council recommends monitoring rapid developments, including generative artificial intelligence, to ensure that supervisory structures keep pace with or anticipate risks.” FSOC, established after the 2008 financial crisis, identified various risks, including cybersecurity, compliance, privacy, and new issues arising from the use of generative artificial intelligence models. “Errors and biases in artificial intelligence data can become even more challenging to identify and correct as technologies advance in complexity, which emphasizes the need for vigilance from technology developers, financial sector companies, and supervisory regulators,” the FSOC stated, highlighting the danger posed by these models’ tendency to fabricate information and produce convincing but flawed results.


Apart from the Treasury Secretary, the FSOC includes all of the country’s major financial regulators, including Federal Reserve Chair Jerome Powell, Consumer Financial Protection Bureau Director Rohit Chopra, and SEC Chair Gary Gensler. In May, Gensler warned that if the field is not properly and quickly regulated, it will “inevitably” lead to a financial crisis within a decade. He specifically expressed concern that many financial institutions will base their decisions on AI, leading to herd behavior. “The crisis in 2027,” he said, “will be because everything rests on one foundational model, so-called generative artificial intelligence, around which a group of fintech applications was built.”


In July, the SEC proposed new rules requiring brokers and investment advisers to take steps to address conflicts of interest related to their use of predictive analytics and similar technologies. “Data analytics models today increasingly provide the ability to make predictions about each of us as individuals,” said Gensler at the time. “This raises concern about conflicts if advisers or brokers optimize in a way that prioritizes their interests over the interests of their investors.”


In October, President Joe Biden issued an executive order on the responsible management, development, and use of artificial intelligence, promoting a coordinated government-wide approach. The order outlines principles and priorities for achieving these goals, including ensuring that the technology is safe and secure, promoting innovation, and protecting the interests of American users. “The federal government will enforce appropriate measures to protect against fraud, unintentional bias, discrimination, privacy violations, and other vulnerabilities,” it states, adding that the Treasury must publish a report within 150 days on “best practices for financial institutions to manage cybersecurity risks related to artificial intelligence.”


Meanwhile, several large financial institutions in the U.S. have already declared that they are incorporating AI tools into their work, such as investment company Fidelity, which sees “potential” in using these tools in wealth management, and Goldman Sachs, which announced last month that it is working on about 12 projects involving generative AI. Morgan Stanley is launching a bot in collaboration with OpenAI to assist financial advisers, while JPMorgan Ventures is actively recruiting for new AI-related roles. JPMorgan CEO Jamie Dimon believes the technology will help reduce the workweek to just 3.5 days.


This activity has prompted authorities to examine the subject in depth. The Wall Street Journal recently revealed that the SEC has reached out to investment advisers and asked them to disclose how they integrate or use AI tools in their work. In the process, known as a “sweep,” the regulator requested information on the algorithms used to manage client portfolios, on third-party providers, and on compliance.


On the other side of the ocean, the European Union has already completed legislation on artificial intelligence, which is expected to be the most advanced of its kind in the world. The legislation, developed over about three years, is the result of lengthy discussions and compromises with leading companies in the field. It regulates how the technology can be deployed and how data can be collected and processed, and it specifies the protections companies must put in place for the public, including an obligation to explain their systems’ results and decisions. The legislation is expected to pass in the spring and to take two years to come into full force.


Meanwhile, Israel is taking slow steps in this regard. This week, the Ministry of Innovation, Science, and Technology, in collaboration with the Ministry of Justice, published a document aimed at guiding government ministries in this domain. “After completing a comprehensive and professional survey, I decided to adopt the conclusions of the document regarding regulatory policy and ethics on artificial intelligence in Israel,” said Minister Ofir Akunis. “I intend to promote a government decision that will enable the adoption of the policy, guide regulators to operate accordingly, and enable its implementation.”


However, the document itself does not include detailed, specific, and advanced steps like those proposed by the European Union or the United States, as the Ministry of Science document itself emphasizes: “It should be noted once again that the document does not claim to offer specific solutions to the challenges arising from the use of artificial intelligence technology, to prescribe specific arrangements or a systematic methodology for designing such arrangements, or to decide weighty questions related to the content of regulation in various sectors or broadly,” the 124-page document concludes. “The principles of regulatory and ethical policy and the recommendations for their implementation presented in this document are intended to promote a process within the government.”
