The reality of an AI-enabled future may not be just around the corner, after Facebook owner Meta shut down an AI demo model after only three days of operation.

Meta “paused” its Galactica AI project, which had been released on 15 November and was designed to quickly and accurately answer questions for the scientific community.

It seems that Meta’s Galactica project was closed down amid reports it generated racist, dangerous, and incorrect information.

Image credit: Meta

Meta Galactica

From the start, Meta stated that its Galactica project was “a large language model that can store, combine and reason about scientific knowledge.”

Before it was pulled offline, people could ask the AI to generate a wiki entry, literature review, or research paper on nearly any subject.

Meta’s Galactica project had apparently been trained on 48 million science papers, but it was pulled after The Next Web (TNW) highlighted some of the dangerous content it had generated.

This included well-written research papers on the benefits of suicide, antisemitism and eating crushed glass, as well as on why homosexuals are evil.

It also reportedly gave (incorrect) instructions on how to make the incendiary weapon napalm in a bathtub.

TNW reporter Tristan Greene noted the development on Twitter.

He also noted that he had the opportunity to briefly discuss Galactica with the person responsible for its creation, Meta’s chief AI scientist Yann LeCun, who rebutted most of his concerns and defended the project.

LeCun then confirmed that the Galactica demo was offline for now.

AI experiments

Meta has been experimenting with AI for some time, and its projects have prompted concerns before.

In October 2021, Facebook unveiled an artificial-intelligence research project that it hoped would make machines think more like the people who use them.

The Ego4D programme was apparently training AIs to interact with the world from an “egocentric” or first-person perspective.

Then in August this year, Meta made public the BlenderBot 3 chatbot, opening it up to users who agreed to have their data collected.

The BlenderBot 3 chatbot was designed to conduct free-ranging conversations with users based on factual information.

Meta had apparently sought to learn from Microsoft’s 2016 Tay chatbot, which was quickly pulled from public view after users goaded it into generating racist and misogynistic responses.

But it seems that BlenderBot 3 criticised Mark Zuckerberg, and it also apparently repeated election misinformation.

It told Business Insider that Zuckerberg was “creepy”, and told the BBC that Meta “exploits people for money and (Zuckerberg) doesn’t care”.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long-standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
