The UK under the Labour government is continuing the artificial intelligence (AI) safety work that the previous Conservative administration initiated and championed.
The government on Wednesday announced new support for businesses to help them “develop and use trustworthy AI products and services.”
To this end, the government said it will offer businesses a new platform to help them assess and mitigate the risks posed by AI. At the same time, it revealed that the UK AI Safety Institute has signed a new agreement with Singapore to deepen AI safety collaboration.
The government first noted that the UK’s market for ensuring the trustworthiness of AI systems is poised to grow six-fold over the next decade – unlocking more than £6.5 billion.
It said around 524 firms currently make up the UK’s AI assurance sector, employing more than 12,000 people and generating more than £1 billion. These businesses provide organisations with the tools they need to develop or use AI safely, the government noted.
Assurance technologies are essentially tools that can help businesses verify, scrutinise and trust the machine learning products they are utilising. Companies producing this tech in the UK include the likes of Holistic AI, Enzai and Advai.
To aid in this matter, the government said it is unveiling “targeted support for businesses across the country to ensure they can develop and deploy safe, trustworthy AI to kickstart growth and improve productivity.”
Key to this will be a new AI assurance platform, the government said, to give “British businesses access to a one-stop-shop for information on the actions they can take to identify and mitigate the potential risks and harms posed by AI.”
This will focus on capitalising on the growing demand for AI assurance tools and services. The government will also partner with industry to develop a new roadmap to help businesses navigate international standards on AI assurance.
“AI has incredible potential to improve our public services, boost productivity and rebuild our economy but, in order to take full advantage, we need to build trust in these systems which are increasingly part of our day to day lives,” said Secretary of State for Science, Innovation, and Technology, Peter Kyle.
“The steps I’m announcing today will help to deliver exactly that – giving businesses the support and clarity they need to use AI safely and responsibly while also making the UK a true hub of AI assurance expertise,” said Kyle.
The government said the platform will bring together guidance and new practical resources which set out clear steps, such as how businesses can carry out impact assessments and evaluations, and how to review the data used in AI systems to check for bias.
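The government has not published the technical detail of these checks, but a common starting point for reviewing data or model outputs for bias is to compare positive-outcome rates across demographic groups. The sketch below is purely illustrative: the data, the group labels, and the 0.8 threshold (the so-called “four-fifths rule”) are assumptions for the example, not part of the platform’s guidance.

```python
# Illustrative sketch of a simple bias check: compare the rate of
# positive outcomes (e.g. loan approvals) across demographic groups,
# then compute the ratio of the lowest to the highest rate.
from collections import defaultdict

def selection_rates(records):
    """Return the share of positive outcomes per group.

    records: iterable of (group, outcome) pairs, outcome in {0, 1}.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes: group A approved 60 of 100, group B 45 of 100.
data = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 45 + [("B", 0)] * 55
rates = selection_rates(data)          # {'A': 0.6, 'B': 0.45}
ratio = disparate_impact_ratio(rates)  # 0.75 – below 0.8, so flag for review
```

A ratio below 0.8 is often treated as a signal to investigate further, though real assurance work would look at many more metrics and the context of the system’s use.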
Further support will see businesses, particularly small and medium-sized enterprises (SMEs), able to use a self-assessment tool to implement responsible AI management practices in their organisations and make better decisions as they develop and use AI systems.
Rowena Rix, head of innovation and AI at international law firm Dentons, noted that AI assurance systems are welcome, but they need to be backed up by good internal governance systems.
“Businesses can push forward with investment and implementation, even in the face of regulatory uncertainty, by getting good governance controls in place at the outset, ensuring that the use of AI is transparent, compliant, fair, accountable and safe,” said Rix.
“It is important to develop an AI governance structure that anticipates AI risks specific to their business and use cases, as well as evolving legal requirements and regulatory approaches,” said Rix.
“For this reason, the legal team will need to be involved from the outset, so they can play an important role in ensuring that both strategy formulation and implementation are underpinned by a governance structure that is understood at all levels of the business and followed in practice,” said Rix.
Meanwhile, the UK government also announced that the AI Safety Institute (AISI) – the world’s first government body dedicated to AI safety – has in the past two months launched its Systemic AI Safety Grants programme, with up to £200,000 of funding available for researchers across academia, industry and civil society.
Both its Chief Technology Officer Jade Leung and Chief Scientist Geoffrey Irving have been named in TIME Magazine’s ‘100 Most Influential People in AI’ list.
The UK AI Safety Institute continues to work closely with international partners, including supporting the first meeting of the International Network of AI Safety Institutes members in San Francisco later this month.
Further strengthening its global reach, the AI Safety Institute has today announced a new AI safety partnership with Singapore. That agreement will see the two institutes work closely together to drive forward research and work towards a shared set of policies, standards, and guidance.
“We are committed to realising our vision of AI for the Public Good for Singapore, and the world,” said Singapore Minister for Digital Development and Information, Josephine Teo.
“The signing of this Memorandum of Cooperation (MoC) with an important partner, the United Kingdom, builds on existing areas of common interest and extends them to new opportunities in AI.”
The UK is not alone in focusing on AI safety.
Last year the US launched its own AI safety institute, while the European Union has enacted an AI Act that is considered among the toughest regulatory regimes for the new technology.