South Korea and the United Kingdom are hosting the second global AI summit in Seoul, which began on Tuesday and continues on Wednesday.
Sixteen companies at the forefront of developing artificial intelligence pledged on Tuesday to develop the technology safely, at a time when regulators are scrambling to keep pace with its rapid advance.
The AI Safety Summit in Seoul, South Korea, is the second such global summit, following the first-ever AI Safety Summit that the United Kingdom hosted at Bletchley Park last November.
That 2023 summit resulted in the first international declaration on AI, the so-called ‘Bletchley Declaration’, in which attendees agreed that artificial intelligence poses a potentially catastrophic risk to humanity and proposed a series of steps to mitigate this risk.
Six months later, the AI Seoul Summit has gathered political leaders, who have agreed to launch the first international network of AI Safety Institutes to boost understanding of AI.
A new agreement was reached between 10 countries and the European Union, committing the nations to work together to launch an international network to accelerate the advancement of the science of AI safety.
And the UK and the Republic of Korea have secured commitments from 16 global AI tech companies to a set of safety outcomes, building on the Bletchley agreements with an expanded list of signatories.
Tech companies at the forefront of AI, including firms from China and the UAE, have committed not to develop or deploy AI models if the risks cannot be sufficiently mitigated. The agreement also commits companies to ensuring accountable governance structures and public transparency on their approaches to frontier AI safety.
Under the fresh ‘Frontier AI Safety Commitments’, the signatory AI tech companies will each publish safety frameworks, where they have not done so already, on how they will measure the risks of their frontier AI models, such as the risk of the technology being misused by bad actors.
The frameworks will also outline when severe risks, unless adequately mitigated, would be “deemed intolerable” and what companies will do to ensure thresholds are not surpassed.
In the most extreme circumstances, the companies have also committed to “not develop or deploy a model or system at all” if mitigations cannot keep risks below the thresholds.
In defining these thresholds, companies will take input from trusted actors, including their home governments as appropriate. The thresholds will be released ahead of the AI Action Summit in France in early 2025.
“It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety,” said UK Prime Minister Rishi Sunak. “These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI.”
“It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology,” said the UK Prime Minister. “The UK’s Bletchley summit was a great success and together with the Republic of Korea we are continuing that success by delivering concrete progress at the AI Seoul Summit.”
“Ensuring AI safety is crucial for sustaining recent remarkable advancements in AI technology, including generative AI, and for maximizing AI opportunities and benefits, but this cannot be achieved by the efforts of a single country or company alone,” added Republic of Korea Minister Lee.
“In this regard, we warmly welcome the ‘Frontier AI Safety Commitments’ established by global AI companies in collaboration with the governments of the Republic of Korea and the UK during the ‘AI Seoul Summit’, and we expect companies to implement effective safety measures throughout the entire AI lifecycle of design, development, deployment and use,” Lee stated.
“We are confident that the ‘Frontier AI Safety Commitments’ will establish itself as a best practice in the global AI industry ecosystem, and we hope that companies will continue dialogues with governments, academia, and civil society, and build cooperative networks with the ‘AI Safety Institute’ in the future,” said Lee.
Attending the AI Seoul Summit are the governments of Australia; Canada; China; France; Germany; India; Italy; Japan; the Kingdom of Saudi Arabia; the Republic of Korea; the Republic of Singapore; the Republic of the Philippines; Rwanda; Spain; Switzerland; Turkey; the United Arab Emirates; the UK; the USA; and Ukraine.
The event is also being attended by the European Commission, the Organisation for Economic Co-operation and Development (OECD), and the United Nations.
Big name tech attendees include Amazon, Anthropic, Google/Google DeepMind, IBM, Meta, Microsoft, Mistral, OpenAI, Samsung Electronics, Tencent and xAI.