Google Restricts Gemini AI Chatbot Election Answers
Search engine giant to restrict election-related queries that users can ask Gemini chatbot, after restriction applied in US, India
Alphabet’s Google is seeking to safeguard itself against future misinformation accusations, confirming that it will apply election-related restrictions to its Gemini AI chatbot.
In a blog post, Google’s India team confirmed that it will restrict the types of election-related queries that users can ask its Gemini chatbot. It confirmed that it has already rolled out the changes in the United States and in India, the world’s largest democracy.
India’s general election begins in April, with the US Presidential election taking place later this year. But a number of other national elections are also taking place in 2024, including in the United Kingdom and South Africa, as well as European Parliament elections in the summer.
Election restrictions
Indeed, around 4 billion people in 40 countries are expected to participate in elections in 2024.
Last month twenty of the world’s biggest technology companies, including Amazon, Adobe, Google, Meta, Microsoft, OpenAI, TikTok and X, vowed to take measures against the misuse of artificial intelligence (AI) to disrupt elections around the world this year.
In the blog post, Google’s India team made clear that it is restricting the election-related queries users can ask the Gemini chatbot, not just in India but in other nations as well.
“Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses,” Google stated in the blog post. “We take our responsibility for providing high-quality information for these types of queries seriously, and are continuously working to improve our protections.”
When asked about elections such as the upcoming US presidential match-up between Joe Biden and Donald Trump, Gemini reportedly responds with: “I’m still learning how to answer this question. In the meantime, try Google Search.”
Google also pointed out that in 2023 it was the first tech company to launch new disclosure requirements for election ads containing ‘synthetic content.’
Google additionally noted that it has already started displaying labels on content created with YouTube’s generative AI features, and will soon require creators to disclose when they have produced realistically altered or synthetic content. Content creators on YouTube will have to display a label indicating to viewers when they are watching altered content.
Misinformation concerns
It comes at a time when advancements in generative AI, including image and video generation, have fanned concerns of misinformation and fake news among the public. These concerns have prompted governments and watchdogs to regulate the technology.
And 2024 is being viewed as an important time to be vigilant against these types of interference and misinformation campaigns.
It should be remembered that election-related misinformation was a major problem in the 2016 US presidential campaign, when Russian actors deployed cheap and easy ways to spread inaccurate content across social platforms.
Facebook parent Meta Platforms said last month that it would set up a team to tackle disinformation and the abuse of generative AI in the run-up to European Parliament elections in June.
Prior to that in September 2023, an Australian research group (Reset.Tech Australia) that seeks to counter online misinformation, wrote an open letter to Elon Musk, expressing its “urgent concerns about the ability for users to report electoral misinformation on your platform.”
That came soon after the European Union had labelled X/Twitter as the largest spreader of Russian misinformation.
The EU warned tech firms to do more to combat Russian disinformation campaigns ahead of elections in Europe.