UK Government Pledges £100m For AI Taskforce

The UK government has announced a taskforce with an initial £100 million in funding to develop artificial intelligence (AI) foundation models, the technology underpinning OpenAI’s ChatGPT and similar tools, for use across the economy.

Meanwhile a European consumer protection group warned of the risks such tools may pose to consumers and children.

The government said foundation models, including large language models (LLMs) such as the one that powers ChatGPT, could be used in healthcare, education and other sectors.

The technology is predicted to raise global GDP by 7 percent over a decade, making its adoption “a vital opportunity” to grow the UK economy, the government said.

Image credit: Matheus Bertelli/Pexels

Commercial opportunities

“To support businesses and public trust in these systems and drive their adoption, the taskforce will work with the sector towards developing the safety and reliability of foundation models, both at a scientific and commercial level,” the government stated.

Prime Minister Rishi Sunak said AI provides “enormous opportunities” to expand the economy, create better-paid jobs and improve healthcare and security.

“By investing in emerging technologies through our new expert taskforce, we can continue to lead the way in developing safe and trustworthy AI as part of shaping a more innovative UK economy,” he said.

Meanwhile, consumer concerns around AI continue to grow after Italy outlawed ChatGPT on data protection grounds earlier this month.

Consumer concerns

The European Consumer Organisation (BEUC) on Monday called on EU consumer protection agencies to investigate the technology and potential harm to the individual.

The organisation set out its concerns in separate letters earlier this month to consumer safety and consumer protection agencies.

It said content produced by chatbots may appear true and reliable but is often factually incorrect, potentially misleading consumers and resulting in deceptive advertising.

Younger consumers and children are more vulnerable to such risks, it said.

“BEUC thus asks you to investigate the risks that these AI systems pose to consumers as a matter of urgency, to identify their presence in consumer markets and to explore what remedial action must be taken to avoid consumer harm,” said BEUC deputy director general Ursula Pachl in the letter to consumer protection agencies and the European Commission.

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
