Artificial intelligence poses risks to the financial system, according to Consumer Financial Protection Bureau Director Rohit Chopra.
Chopra identified the risks he sees in the emerging technology late last month in testimony before the Senate Banking Committee and House Financial Services Committee. He said AI-fueled misconduct could be “dramatically magnified” if firms depend on the same foundational model or if a fraudster successfully mimics human connection.
“This may not be an accident,” he noted. “This may actually be a purposeful way to disrupt the U.S. financial system and we should look at it with that mindset.”
To prevent AI from harming the financial system, Chopra said regulators must ensure that providers are held responsible for data breaches if they fail to implement sufficient safeguards. “Where there is extremely opaque AI, that magnifies disruptions in a market that turns tremors into earthquakes,” he added.
During an Axios AI+ summit last month, Chopra said that as more financial institutions adopt AI and acquire data, existing laws must be considered, including those that combat fraud and protect national security and intellectual property.
“It’s the winner-take-all dimension of this that makes it much more pressing, and it’s the ability to simulate human interaction in a way that I don’t think we’ve seen before, and the way that that can interfere with human life and perpetuate fraud, crime, abuse,” Chopra added.
He expressed concern that AI services are concentrated in only a few companies and argued that long-standing anti-monopoly rules must be enforced. “We won’t be able to unleash the progress of innovation from that,” he said of a lack of competition.
On Dec. 14, the Financial Stability Oversight Council identified AI as a vulnerability in the financial system. According to the agency, though AI can help banks reduce costs and improve efficiencies, identify more complex relationships and improve performance, the technology can also introduce cyber and model risks.
FSOC recommended that banks, market participants and regulators strengthen their expertise and capacity to monitor AI innovation and use, and to identify risks. The agency also called for existing requirements and guidance to be applied to AI “to ensure that oversight structures account for emerging risks to the financial system while also facilitating efficiency and innovation.”
The Office of the Comptroller of the Currency has also identified AI as posing emerging risks to banking. According to a Dec. 7 semiannual risk report, AI poses risks both through third parties and to cybersecurity and consumer protection. The technology can also pose bias and discrimination-related challenges if it is improperly trained or used with data sets that perpetuate past bias.
The OCC called on banks to manage the technology “in a safe, sound and fair manner, commensurate with the materiality and complexity of the particular risk of the activity or business processes supported by AI usage.”