A group of artificial intelligence (AI) experts and executives have banded together to urge a six month pause in developing more advanced systems than OpenAI’s newly launched GPT-4.
The call came in an open letter which cited the risks to society, and big name tech luminaries have added their signature to the letter.
This includes Steve Wozniak, co-founder of Apple; Elon Musk, CEO of SpaceX, Tesla and Twitter; researchers at DeepMind; AI heavyweight Yoshua Bengio (often referred to as one of the “godfathers of AI”); and Professor Stuart Russell, a pioneer of research in the field.
However there were some notable omissions of people who did not sign the open letter, including Alphabet CEO Sundar Pichai; Microsoft CEO Satya Nadella; and OpenAI chief executive Sam Altman, who earlier this month publicly stated he was a “little bit scared” of artificial intelligence technology and how it could affect the workforce, elections and the spread of disinformation.
Interest in AI has surged recently thanks to the popularity of AI chatbots like OpenAI’s ChatGPT, and Google’s Bard, driving oversight concerns about the tech.
Earlier this month the US Chamber of Commerce called or the regulation of artificial intelligence technology.
And this week the UK government set out its own plans for ‘adaptable’ regulations to govern AI systems.
Now the open letter made clear the alarm about advanced AI, being felt among high-level experts, academics and executives about the technology.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” the letter states.
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
“Such decisions must not be delegated to unelected tech leaders,” the letter states. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the letter stated. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
“Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt,” said the letter.
“Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”
Both Elon Musk and Steve Wozniak, and others including the late Professor Stephen Hawking have warned about the dangers of AI in previous years.
Indeed Professor Hawking warned artificial intelligence could spell the end of life as we know it on Planet Earth.
Professor Hawking also predicted that humanity has just 100 years left before the machines take over.
Musk meanwhile was a co-founder of OpenAI – though he resigned from the board of the organisation years ago.
However Musk previously stated he believes AI poses a real threat to humans if unchecked, and in 2014 tweeted that artificial intelligence could evolve to be “potentially more dangerous than nukes”.
In 2015 Musk donated $10 million to the Future of Life Institute (FLI) – a non-profit advisory board dedicated to weighing up the potential of AI technology to benefit humanity.
Likewise Apple’s Steve Wozniak predicted eight years ago that in the future, the world will be controlled by AI and that robots will treat humans as their pets.
Some experts have questioned whether it is realistic to halt the development of even more advanced AI systems and technology.
“Although the development of generative AI is going faster and has a broader impact than expected, it’s naive to believe the development of generative AI can be stopped or even paused,” noted Frederik Mennes, director product management & business strategy at cybersecurity specialist OneSpan.
“There is now a geopolitical element,” said Mennes. “If development would be stopped in the US, other regions will simply catch up and try to take the lead.”
“If there is a need for regulation, it can be developed in parallel,” Mennes concluded. “But the development of the technology is not going to wait until regulation has caught up. It’s just a reality that technology develops and regulation catches up.”
Another expert highlighted what he feels are the two main risks associated with advanced AI technology, but also feels the cat is already out of the bag.
“The interesting thing about this letter is how diverse the signers and their motivations are,” noted Dan Shiebler, head of machine learning at cloud security specialist Abnormal Security.
“Elon Musk has been pretty vocal that he believes AGI (computers figuring out how to make themselves better and therefore exploding in capability) to be an imminent danger, whereas AI sceptics like Gary Marcus are clearly coming to this letter from a different angle,” said Shiebler.
“In my mind, technologies like LLMs (large language models) present two types of serious and immediate risks,” said Shiebler.
“The first is that the models themselves are powerful tools for spammers to flood the internet with low quality content or for criminals to uplevel their social engineering scams,” said Shiebler. “At Abnormal we have designed our cyberattack detection systems to be resilient to these kinds of next-generation commoditised attacks.”
“The second is that the models are too powerful for businesses to not use, but too unpolished for businesses to use safely,” said Shiebler. “The tendency of these models to hallucinate false information or fail to calibrate their own certainty poses a major risk of misinformation.”
“Furthermore, businesses that employ these models risk cyberattackers injecting malicious instructions into their prompts,” said Shiebler. “This is a major new security risk that most businesses are not prepared to deal with.”
“Personally, I don’t think this letter will achieve much,” Shiebler concluded. “The cat is out of the bag on these models. The limiting factor in generating them is money and time, and both of these will fall rapidly. We need to prepare businesses to use these models safely and securely, not try to stop the clock on their development.”
Targetting AWS, Microsoft? British competition regulator soon to announce “behavioural” remedies for cloud sector
Move to Elon Musk rival. Former senior executive at X joins Sam Altman's venture formerly…
Bitcoin price rises towards $100,000, amid investor optimism of friendlier US regulatory landscape under Donald…
Judge Kaplan praises former FTX CTO Gary Wang for his co-operation against Sam Bankman-Fried during…
Explore the future of work with the Silicon In Focus Podcast. Discover how AI is…
Executive hits out at the DoJ's “staggering proposal” to force Google to sell off its…