Facebook parent Meta has said it wants to use data from a chatbot it released publicly earlier this month to build a virtual assistant that can conduct free-ranging conversations with users based on factual information.
The BlenderBot 3 chatbot was made publicly available to users who agree to have their data collected.
Meta said the test conversations will be stored and later published for use by AI researchers.
The chatbot is based on Meta’s previous work with large language models (LLMs), text-generation software that is trained on large datasets.
Such models mine the datasets for statistical patterns in order to generate language.
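As a rough illustration of that kind of text generation (and not of BlenderBot 3 itself, which is far larger and also draws on search results), the sketch below loads an earlier, smaller BlenderBot checkpoint through the Hugging Face transformers library and asks it for a single reply. The model name facebook/blenderbot-400M-distill and the availability of the transformers package are assumptions for the example, not details from this article.

```python
# Minimal sketch of LLM-style text generation, assuming the Hugging Face
# "transformers" library and Meta's earlier, smaller BlenderBot checkpoint
# "facebook/blenderbot-400M-distill" are available. This is not BlenderBot 3,
# which is far larger and augments its replies with internet search.
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

model_name = "facebook/blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(model_name)
model = BlenderbotForConditionalGeneration.from_pretrained(model_name)

# Encode a single user utterance and let the model generate a reply token by
# token, based on the statistical patterns it learned during training.
utterance = "What do you like to do on weekends?"
inputs = tokenizer([utterance], return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```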
BlenderBot 3 is able to conduct internet searches to inform its answers, and lets users click on its responses to see what information they were based on.
In addition to online data, the answers are influenced by the AI’s memory of its previous conversations with the user.
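Meta has not detailed how that memory works internally, but the general pattern is straightforward: earlier turns, along with any retrieved search snippets, are carried forward as context for the next generation call. The sketch below is a hypothetical, minimal illustration of that idea in plain Python; the ConversationMemory class and its methods are invented for this example and do not come from Meta’s code.

```python
# Illustrative sketch only: one common way to give a chatbot "memory" is to
# carry earlier turns (and any retrieved search snippets) forward as part of
# the context for the next generation call. Class and method names here are
# hypothetical and not taken from Meta's BlenderBot 3 implementation.
from dataclasses import dataclass, field


@dataclass
class ConversationMemory:
    max_turns: int = 10                      # keep only the most recent turns
    turns: list = field(default_factory=list)

    def add(self, speaker: str, text: str) -> None:
        self.turns.append(f"{speaker}: {text}")
        self.turns = self.turns[-self.max_turns:]

    def build_context(self, search_snippets=None) -> str:
        # Retrieved web snippets, if any, are prepended so the generator can
        # ground its reply in them alongside the conversation history.
        parts = list(search_snippets or [])
        parts.extend(self.turns)
        return "\n".join(parts)


memory = ConversationMemory()
memory.add("User", "Who founded Facebook?")
memory.add("Bot", "Facebook was founded by Mark Zuckerberg in 2004.")
memory.add("User", "What else has he worked on?")

# The assembled context would then be fed to a text-generation model.
print(memory.build_context(search_snippets=["[web] Meta Platforms, Inc. ..."]))
```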
Meta has said it wants to use the bot to iron out problems with such models, including the difficulty of evaluating the factual accuracy of their responses.
The company has learned from the example of Microsoft’s Tay chatbot, launched in 2016 and quickly pulled from public view after users goaded it into generating racist and misogynistic responses.
BlenderBot 3 has various controls in place to “minimize the bots’ use of vulgar language, slurs, and culturally insensitive comments”, Meta has said.
Unlike Tay, it relies on a static model that does not learn from users’ behaviour in real time, with researchers saying the interactions will be used to improve the BlenderBot model only at a later stage.
Users have, however, provoked BlenderBot 3 into saying unflattering things about Meta chief executive Mark Zuckerberg and Meta.
“It is funny that he has all this money and still wears the same clothes!” the chatbot told BuzzFeed, referring to Zuckerberg.
It told Business Insider that Zuckerberg was “creepy”, while the BBC was informed that Meta “exploits people for money and (Zuckerberg) doesn’t care”.
The bot told a Wall Street Journal reporter that Donald Trump was, and always would be, US president, “even after his second term ends in 2024”.
“Everyone who uses Blender Bot is required to acknowledge they understand it’s for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements,” Meta said in a statement.