Google GOOGL is taking preemptive measures to ensure its AI chatbot, Gemini, does not become a source of misinformation.
What Happened: Google has begun enforcing restrictions on the types of election-related questions that Gemini, its AI chatbot, can respond to, BBC News reported on Wednesday. This step is part of a broader effort to prevent AI from stirring up controversies during election periods, particularly as India approaches its elections in April.
A Google spokesperson told the BBC that this action is consistent with the company’s election integrity strategy announced last December.
“As we shared last December, in preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we’re restricting the types of election-related queries for which Gemini will return responses,” they stated.
In India, where the restrictions are already active, Gemini is directing users to Google Search for questions about elections in the U.S., U.K., and South Africa. Nonetheless, the chatbot continues to provide detailed responses on Indian political matters.
The decision follows heightened scrutiny of generative AI for its potential to spread misinformation, leading to regulatory measures worldwide. India has recently required tech companies to seek approval before deploying AI tools that are considered “unreliable” or in the testing phase.
Google has faced AI-related issues, including an apology for its AI image generator’s misrepresentation of historical figures and a temporary suspension of the tool after it produced inappropriate content.
Why It Matters: Google’s move to limit Gemini’s responses comes amid growing concern over the role of AI in spreading election misinformation. A report by the Center for Countering Digital Hate highlighted the risks associated with AI image tools from companies like OpenAI and Microsoft Corp, which have been used to create misleading images related to elections.
Furthermore, Google’s CEO Sundar Pichai has previously acknowledged Gemini AI’s shortcomings, describing its generation of historically inaccurate images and text as “completely unacceptable.” This acknowledgment came after the AI misrepresented figures such as Nazi-era German soldiers and US Founding Fathers, leading to a temporary halt of its image generation capabilities.
Engineered by Benzinga Neuro, Edited by Pooja Rajkumari
© 2024 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.