Google will "introduce limits on the types of election-related queries" its Gemini AI chatbot will respond to. The news comes after a surge in content generated by artificial intelligence (AI) sparked major concerns about disinformation during the election, according to a CNBC article.
Google restricts Gemini chatbot from responding to election questions
Google has announced that it will limit the types of election-related questions users can pose to the Gemini chatbot, according to a company blog post. The company also said the restrictions have already been rolled out in the United States and India, both of which are gearing up for elections scheduled for this spring. A Google representative who spoke to CNBC said the adjustments are consistent with the company's planned approach to elections.
AI-related misinformation poses a threat to elections
According to a recent report from the World Economic Forum, some 4.2 billion people are expected to vote in 2024. With so many people heading to the polls, misinformation and deception could undermine the democratic process in the AI era. Given the "post-truth era" and the recent explosion of generative AI, technology companies, governments, and the media need to think carefully about how they can support democratic processes. According to the Global Risks Report 2024, misinformation and disinformation pose the greatest threat to stability because they cast doubt on the legitimacy of election results.
Google's Gemini chatbot faces reliability hurdles
Criticism of AI companies has mounted in recent months, driven largely by concerns about accuracy and reliability. Microsoft's AI system recently came under fire after an employee claimed it was producing "disturbing images." Similar complaints have been directed at Google, which was accused of using its Gemini AI tool to create inaccurate images, with particular scrutiny over how the model depicted people of different racial backgrounds. According to reports, Google confirmed it is working to fix the recent issues with Gemini's image generation feature and, in response to the complaints, paused the generation of images of people.