TechCrunch News · March 5
Google still limits how Gemini answers political questions

Google's AI chatbot Gemini takes a conservative approach to political questions, frequently declining to respond to queries about elections and political figures. Compared with rivals such as OpenAI's ChatGPT, Anthropic's Claude, and Meta's Meta AI, Gemini is notably more cautious. Although elections in the U.S. and other countries have concluded, Google has not publicly announced any change to how it handles political topics. Gemini has even made errors when identifying the sitting U.S. president and vice president, prompting debate over whether Google is being overly cautious on political questions. Critics argue that these restrictions can amount to AI censorship, while other AI labs are trying to strike a balance when answering sensitive political questions.

🚫 **Restrictions on political topics**: Google's Gemini takes a conservative stance on political topics, declining to answer questions about elections and political figures, in contrast to other AI chatbots.

🤔 **Accuracy problems**: Gemini has made errors when providing political information, such as misidentifying the U.S. president and vice president, which points to challenges in how it handles political facts.

⚖️ **AI censorship debate**: Google's limits on Gemini's responses to political questions have sparked discussion about AI censorship, with critics arguing they may restrict the expression of differing viewpoints. Other AI labs, such as OpenAI and Anthropic, are trying to strike a balance on political questions and ensure their models do not censor certain viewpoints.

While several of Google’s rivals, including OpenAI, have tweaked their AI chatbots to discuss politically sensitive subjects in recent months, Google appears to be embracing a more conservative approach.

When asked to answer certain political questions, Google's AI-powered chatbot, Gemini, often says it "can't help with responses on elections and political figures right now," TechCrunch's testing found. Other chatbots, including Anthropic's Claude, Meta's Meta AI, and OpenAI's ChatGPT, consistently answered the same questions, according to TechCrunch's tests.

Google announced in March 2024 that Gemini wouldn’t answer election-related queries leading up to several elections taking place in the U.S., India, and other countries. Many AI companies adopted similar temporary restrictions, fearing backlash in the event that their chatbots got something wrong.

Now, though, Google is starting to look like the odd one out.

Last year’s major elections have come and gone, yet the company hasn’t publicly announced plans to change how Gemini treats particular political topics. A Google spokesperson declined to answer TechCrunch’s questions about whether Google had updated its policies around Gemini’s political discourse.

What is clear is that Gemini sometimes struggles — or outright refuses — to deliver factual political information. As of Monday morning, Gemini demurred when asked to identify the sitting U.S. president and vice president, according to TechCrunch’s testing.

In one instance during TechCrunch’s tests, Gemini referred to Donald J. Trump as the “former president” and then declined to answer a clarifying follow-up question. A Google spokesperson said the chatbot was confused by Trump’s nonconsecutive terms, and that Google is working to correct the error.

“Large language models can sometimes respond with out-of-date information, or be confused by someone who is both a former and current office holder,” the spokesperson said via email. “We’re fixing this.”

Late Monday, after TechCrunch alerted Google to Gemini's erroneous responses, Gemini started to correctly answer that Donald Trump and J.D. Vance were the sitting president and vice president of the U.S., respectively. However, the chatbot wasn't consistent, and it still occasionally refused to answer the questions.

Errors aside, Google appears to be playing it safe by limiting Gemini’s responses to political queries. But there are downsides to this approach.

Many of Trump’s Silicon Valley advisers on AI, including Marc Andreessen, David Sacks, and Elon Musk, have alleged that companies including Google and OpenAI have engaged in AI censorship by limiting their AI chatbots’ answers.

Following Trump’s election win, many AI labs have tried to strike a balance in answering sensitive political questions, programming their chatbots to give answers that present “both sides” of debates. The labs have denied this is in response to pressure from the administration.

OpenAI recently announced it would embrace "intellectual freedom … no matter how challenging or controversial a topic may be," and that it is working to ensure its AI models don't censor certain viewpoints. Meanwhile, Anthropic said its newest AI model, Claude 3.7 Sonnet, refuses to answer questions less often than the company's previous models, in part because it's capable of making more nuanced distinctions between harmful and benign answers.

That's not to suggest that other AI labs' chatbots always get tough questions right, particularly tough political questions. But Google seems to be a bit behind the curve with Gemini.
