How to prevent AI from provoking the next financial crisis

Amid talk of job cuts due to artificial intelligence, Gary Gensler thinks robots will actually create more work for financial watchdogs. The US Securities and Exchange Commission chair describes an AI-driven financial crisis within a decade as “nearly unavoidable” without regulatory intervention. The immediate risk, in other words, is less a robot takeover than a new financial crash.

Gensler’s critics argue that the risks posed by AI are not novel, and have existed for decades. But the nature of these systems, created by a handful of hugely powerful tech companies, requires a new approach beyond siloed regulation. Machines may make finance more efficient, but could do just as much to trigger the next crisis.

Among the risks Gensler pinpoints is “herding”, in which multiple parties make similar decisions. Such behaviour has played out countless times: the stampede of financial institutions into packages of subprime mortgages sowed the seeds of the 2008 financial crisis. The growing reliance on AI models produced by a few tech companies increases that risk. The opaque nature of these systems also makes it difficult for regulators and institutions to assess which data sets they rely on.

Another danger lies in the paradox of explainability, which Gensler noted in a paper he co-wrote in 2020 as an MIT academic. If AI predictions could be easily understood, simpler systems could be used instead; it is the models’ ability to produce new insights from what they learn that makes them valuable. But that same quality hampers accountability and transparency: a lending model trained on historical data could produce, say, racially biased results, and identifying the bias would take after-the-fact investigation.

Reliance on AI also entrenches power in the hands of technology companies, which are increasingly making inroads into finance but are not subject to strict oversight. There are parallels with the world of cloud computing in finance. In the west, the triumvirate of Amazon, Microsoft and Google provides services to the biggest lenders. This concentration raises competition concerns, and affords the trio at least the theoretical ability to move markets in the direction of their choosing. It also generates systemic risk: an outage at Amazon Web Services in 2021 affected companies ranging from iRobot, maker of the Roomba robot vacuum, to dating app Tinder. An equivalent failure in a widely used trading algorithm could trigger a market crash.

Watchdogs have pushed back against the awkward nexus of technology and finance before, as with Diem, the Meta-backed digital currency formerly known as Libra. But mitigating the risks from AI requires either expanding the perimeter of financial regulation or pushing authorities across different sectors to collaborate far more effectively. Given AI’s potential to affect every industry, that co-operation should be broad. The history of credit default swaps and collateralised debt obligations shows how dangerous siloed thinking can be.

The authorities will also need to take a leaf from the book of those convinced that AI is going to conquer the world, focusing on structural challenges rather than individual cases. The SEC itself proposed a rule in July addressing possible conflicts of interest in predictive data analytics, but it focused on individual models used by broker-dealers and investment advisers. Regulation should scrutinise the underlying systems as much as specific cases.

Neo-Luddism is not warranted; AI is not inherently bad for financial services. It can speed up the delivery of credit, support better trading and combat fraud. That regulators are engaging with the technology is also welcome: adopting it themselves could accelerate data analysis and deepen institutional understanding. AI can be a friend to finance, if the watchmen have the right tools to keep it on the rails.
