SEC Chairman Calls For AI Caution, Cites Financial Stability Risk
Gary Gensler urges companies to be cautious about rushing aboard the AI hype train, and points out its risks for fraud and financial stability
The head of one federal agency in the United States has sounded a note of caution over the risks for publicly traded companies diving into artificial intelligence (AI).
The warning came from the chairman of the US Securities and Exchange Commission (SEC), Gary Gensler, in a speech on Tuesday at the Yale Law School.
Gensler was last in the news headlines in early January, when he officially approved the first US-listed exchange-traded funds (ETFs) to track bitcoin, in what was labelled a watershed moment for the world’s largest cryptocurrency, as well as the broader crypto industry.
AI caution
But Gensler used a speech this week to warn people against buying into the current AI feeding frenzy, and to beware of misleading AI hype and so-called ‘AI-washing’, where publicly traded firms misleadingly or untruthfully promote their use of AI, which can harm investors and run afoul of US securities law.
“We’ve seen in our economy how one or a small number of tech platforms can come to dominate a field,” said Gensler. “There’s one leading search engine (Google), one leading retail platform (Amazon), and three leading cloud providers (Amazon Web Services, Microsoft Azure, Google Cloud Platform).”
“I think due to the economies of scale and network effects at play we’re bound to see the same develop with AI,” he cautioned. “In fact, we’ve already seen affiliations between the three largest cloud providers and the leading generative AI companies.”
He pointed out that thousands of financial entities are looking to build downstream applications relying on what is likely to be but a handful of base models upstream.
“Such a development would promote both herding and network interconnectedness,” he said. “Thus, AI may play a central role in the after-action reports of a future financial crisis – and we won’t have Tom Cruise in Minority Report to prevent it from happening.”
“The challenges to financial stability that AI may pose in the future will require new thinking on system-wide or macro-prudential policy interventions,” said Gensler. “Regulators and market participants will need to think about the dependencies and interconnectedness of potentially 8,316 financial institutions to an AI model or data aggregator.”
Existing laws
Gensler cited Alan Turing, regarded by many as the father of computing, who in 1950 wrote a seminal paper opening with, “I propose to consider the question, ‘Can machines think?’”
Gensler asked what that means for securities law, particularly the laws related to fraud and manipulation.
He warned that there are already plenty of laws to govern bad behaviour, despite the recent clamour for new legislation to regulate AI.
“Fraud is fraud, and bad actors have a new tool, AI, to exploit the public,” he said, before pointing out that under the current securities laws, there are many things you can’t do.
He urged financial institutions implementing AI to consider investor protection and ensure they are also putting in place appropriate guardrails.
“Did those guardrails take into account current law and regulation, such as those pertaining to front-running, spoofing, fraud, and providing advice or recommendations?” he asked.
“Did they test it before deployment and how? Did they continue to test and monitor? What is their governance plan – did they update the various guardrails for changing regulations, market conditions, and disclosures?”
AI-washing
He also cautioned against AI-washing.
“We’ve seen time and again that when new technologies come along, they can create buzz from investors as well as false claims from the Professor Hills of the day,” said Gensler. “If a company is raising money from the public, though, it needs to be truthful about its use of AI and associated risk.”
“As AI disclosures by SEC registrants increase, the basics of good securities lawyering still apply,” he warned. “Claims about prospects should have a reasonable basis, and investors should be told that basis.”
Instead of disclosing those risks using “boilerplate” language about AI, Gensler said, executives should consider whether artificial intelligence plays a significant part in a company’s business, including its internal operations, and craft specific disclosures that speak to those risks.
He also sounded a word of caution about AI-based models providing an increasing ability to make predictions.