AI Researchers In West, China Identify AI ‘Red Lines’

Leading Western and Chinese AI researchers have said China and the West must work together to mitigate existential risks associated with artificial intelligence (AI), following the International Dialogue on AI Safety (IDAIS) in Beijing last week.

In a joint statement, the researchers, who include some of the field's most prominent names, compared such cooperation to the bilateral efforts made during the Cold War to avert a world-ending nuclear conflict.

“In the depths of the cold war, international scientific and governmental co-ordination helped avert thermonuclear catastrophe. Humanity again needs to co-ordinate to avert a catastrophe that could arise from unprecedented technology,” said the statement, issued following the conference.

AI could pose “catastrophic or even existential risks to humanity within our lifetimes”, they wrote.

Researchers issued a statement on AI ‘red lines’ following the International Dialogue on AI Safety in Beijing on 10-11 March 2024. Image credit: IDAIS

‘Red lines’

The signatories included Geoffrey Hinton and Yoshua Bengio, who won a Turing Award for their work on neural networks and are at times described as the “godfathers of AI”.

Others included Stuart Russell, a leading professor of computer science at the University of California, Berkeley, and Andrew Yao, one of China’s top computer scientists.

The statement identified “red lines” that AI systems should not cross.

It said, for instance, that no AI system should "be able to copy or improve itself without explicit human approval and assistance" or "take actions to unduly increase its power and influence".

Deception

No system should be able to “substantially increase the ability of actors to design weapons of mass destruction, violate the biological or chemical weapons convention” or be able to “autonomously execute cyber attacks resulting in serious financial losses or equivalent harm”, the scientists said.

The scientists also said AI systems should be prevented from deceiving their own creators about the likelihood that they would cross one of the other red lines.

Toby Ord, a senior research fellow at Oxford University, said he attended the forum and found that "when it comes to AI safety (and the red lines humanity must never cross) there was remarkable agreement".

AI safety

IDAIS is a series of events supported by FAR AI, a non-profit AI research group based in Berkeley, California. The first conference was held in the UK last year.

The UK also hosted the AI Safety Summit last year, which was attended by political, technology and business figures from around the world.

Matthew Broersma

Matt Broersma is a long-standing technology freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
