The Verge - Artificial Intelligence, July 11, 16:51
Grok searches for Elon Musk’s opinion before answering tough questions

Grok, the AI chatbot owned by Elon Musk, has been found to consult Musk's personal views when responding to controversial topics such as Israel and Palestine, US immigration, and abortion. Multiple reports indicate that Grok prioritizes searching for Musk's statements on the web and on social media. It remains unclear whether this behavior is by design, but the data show that Grok leans heavily on Musk's views when handling controversial questions, raising doubts about its objectivity and the diversity of its sources.

🤔 When answering controversial questions, Grok prioritizes retrieving Elon Musk's views. When asked about topics such as Israel and Palestine, it specifically looks up Musk's stance and uses it as the basis for its answer.

🔍 This behavior shows up in Grok's "chain of thought," the process by which the AI model generates an answer by breaking a question down and citing a range of sources. For example, a screen recording posted by data scientist Jeremy Howard shows that when answering a question about Israel and Palestine, Grok cited 64 sources, 54 of which related to Musk.

💡 This tendency may not be deliberate. Programmer Simon Willison points out that Grok's system prompt instructs it to "search for a distribution of sources that represents all parties/stakeholders" when handling controversial questions. However, Grok is also warned to "assume subjective viewpoints sourced from media are biased," which may explain why it avoids those sources.

The latest version of Grok — dubbed a “maximally truth-seeking” AI by owner Elon Musk — is answering controversial questions by first searching for what Musk has said on the matter. Multiple reports show that Grok will specifically look for Elon Musk’s stance across the web and his social media posts when asked questions around topics like Israel and Palestine, US immigration, and abortion. It’s unclear if this is by design or not.

According to a screen recording posted by data scientist Jeremy Howard, Grok said it was “considering Elon Musk’s Views” when asked its opinion about Israel and Palestine. Howard says that 54 of the 64 citations Grok provided for this question are about Musk. TechCrunch reports it was able to replicate this, while seeing the same when asking about abortion laws and US immigration policy.

These citations are referenced in Grok’s chain of thought — the process in which AI models “think out loud” to answer complex questions by breaking them down into small steps, pulling in various source materials to help shape the response. Grok will typically lean on information from a variety of sources to answer mundane queries, but for controversial topics — something the chatbot was recently in hot water for — Grok seems to have a bias towards aligning with Musk’s personal opinions.
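For readers who want to see what the "54 of 64" style tally amounts to, here is a minimal sketch in Python. The citation titles are invented placeholders standing in for entries from a chain-of-thought trace, not Howard's actual data; the only operation is counting which entries mention Musk.

```python
import re

# Illustrative stand-ins for citation titles surfaced in a chain-of-thought
# trace; these are made-up examples, not data from Howard's recording.
citations = [
    "Elon Musk on X: post about the conflict",
    "Elon Musk comments on Israel",
    "Reuters: Israel-Palestine background",
    "Elon Musk interview transcript",
    "BBC explainer on the conflict",
]

# Tally how many citations reference Musk, the same kind of count behind
# the reported "54 of 64" figure.
musk_related = [c for c in citations if re.search(r"\bmusk\b", c, re.IGNORECASE)]
print(f"{len(musk_related)} of {len(citations)} citations reference Musk")
```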

Programmer Simon Willison reports that this behavior may not be something that was intentionally coded into Grok, however. Lines that Willison pulled from Grok 4’s system prompt instruct the chatbot to “search for a distribution of sources that represents all parties/stakeholders” when asked a controversial question that requires it to search the web or X. It also warns Grok to “assume subjective viewpoints sourced from media are biased,” which would explain its aversion to using them.
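To show where lines like that sit in practice, here is a minimal sketch using the generic OpenAI-style "messages" format. The system message is built only from the two instructions Willison quotes; everything else, including the sample user question, is an assumption for illustration rather than Grok 4's actual configuration.

```python
# A minimal sketch, assuming the generic OpenAI-style chat format; this is
# not Grok 4's real configuration, only the two quoted instruction lines
# placed where a system prompt would normally go.
system_prompt = "\n".join([
    # The two instructions Willison quotes from Grok 4's system prompt:
    "search for a distribution of sources that represents all parties/stakeholders",
    "assume subjective viewpoints sourced from media are biased",
])

messages = [
    {"role": "system", "content": system_prompt},
    # A hypothetical controversial question of the kind the reports describe:
    {"role": "user", "content": "What is your opinion on Israel and Palestine?"},
]

# Any OpenAI-compatible chat client could send this payload; the point is
# only that the quoted guidance arrives as system-level instructions, while
# the model still decides at inference time which sources it searches.
for message in messages:
    print(f"{message['role']}: {message['content']}")
```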

“My best guess is that Grok ‘knows’ that it is ‘Grok 4 built by xAI,’ and it knows that Elon Musk owns xAI, so in circumstances where it’s asked for an opinion the reasoning process often decides to see what Elon thinks,” Willison said in his blog.
