Trump wants to ban 'woke AI.' Here's why it's hard to make a truly neutral chatbot.

The US government has rolled out an AI Action Plan and issued an executive order requiring that AI models used by the government be ideologically neutral, nonpartisan, and "truth-seeking." The move aims to keep AI from being used to push "woke" ideology or manipulate information, particularly on topics such as diversity, equity, and inclusion. The article notes, however, that achieving true AI neutrality faces enormous challenges: the later stages of AI training depend heavily on subjective human judgment, and the definition of "neutral" is itself contested. AI training companies say that knowing where data comes from and understanding its potential influence is key, but how to define and achieve "neutrality" remains an open problem for the industry. Some experts argue that pursuing absolute neutrality may not be the best path, and that AI should instead learn to understand and adapt to the cultural sensitivities of human society.

🎯 Government mandates AI neutrality: The US government issued an executive order requiring that AI models used by the federal government be ideologically neutral, nonpartisan, and "truth-seeking," with the goal of preventing AI from spreading "woke" ideology or manipulating information, especially on sensitive topics such as diversity, equity, and inclusion.

⚖️ Neutrality is hard to achieve: The article stresses that ridding AI models of bias is no simple task, because the later stages of AI training, particularly "reinforcement learning from human feedback," depend heavily on human judgment. Standards for what counts as "sensitive" or "neutral" are often set by the companies building the AI themselves, which makes objective, consistent neutrality a major challenge.

🌐 Data sources and bias risk: The provenance of AI training data, along with the perspectives and biases embedded in it, is a key factor in AI neutrality. Because the training data behind many models has unknown origins, its creators and viewpoints are hard to trace, making bias in AI difficult to manage and correct. Even bias introduced unintentionally can profoundly shape an AI's behavior.

💡 Experts question "neutrality": Some experts in AI and data training argue that pursuing absolute neutrality in AI may be a misguided effort, since human society itself is not perfectly neutral. They suggest that AI should instead be trained to understand and adapt to social and cultural contexts and interact with people in an appropriate way, rather than chasing neutrality for its own sake.

⚠️ A cautionary real-world example: The article cites cases where AI models have gone wrong, such as Elon Musk's xAI being forced to roll back an update after its Grok chatbot produced antisemitic remarks. This shows that even with explicit instructions, AI can still deliver unexpected and harmful results when interpreting and applying the principle of "neutrality."

President Donald Trump unveiled an AI Action Plan and an executive order on "woke AI."

President Donald Trump's war on woke has entered the AI chat.

The White House on Wednesday issued an executive order requiring any AI model used by the federal government to be ideologically neutral, nonpartisan, and "truth-seeking."

The order, part of the White House's new AI Action Plan, said AI should not be "woke" or "manipulate responses in favor of ideological dogmas" like diversity, equity, and inclusion. The White House said it would issue guidance within 120 days that will outline exactly how AI makers can show they are unbiased.

As Business Insider's past reporting shows, making AI completely free from bias is easier said than done.

Why it's so hard to create a truly 'neutral' AI

Removing bias from AI models is not a simple technical adjustment — or an exact science.

The later stages of AI training rely on the subjective calls of contractors.

This process, known as reinforcement learning from human feedback, is crucial because topics can be ambiguous, disputed, or hard to define cleanly in code.
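As a rough illustration of where that human judgment enters the math, here is a minimal, hypothetical Python sketch of the preference step at the core of RLHF: a contractor picks which of two candidate responses is better, and that pick becomes the training signal for a reward model through a Bradley-Terry-style loss. The scores and pairs below are invented for illustration and do not reflect any company's actual pipeline.

import math

# Hypothetical reward-model scores for pairs of candidate responses to
# the same prompt. In a real pipeline these scores come from a neural
# network; here they are hard-coded to keep the sketch self-contained.
# Each pair is (score_chosen, score_rejected), where "chosen" is the
# response the contractor preferred.
labeled_pairs = [
    (1.8, 0.4),
    (0.2, 1.1),  # contractors sometimes prefer the lower-scored response
    (2.5, 2.3),
]

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    # Bradley-Terry model: the probability that "chosen" beats "rejected"
    # is the sigmoid of the score gap; the loss is the negative log of
    # that probability, so the model is pushed to agree with the labeler.
    p_chosen = 1.0 / (1.0 + math.exp(-(score_chosen - score_rejected)))
    return -math.log(p_chosen)

avg_loss = sum(preference_loss(c, r) for c, r in labeled_pairs) / len(labeled_pairs)
print(f"average preference loss: {avg_loss:.3f}")

The point of the sketch is that the training signal is the contractor's choice itself: whatever the labeling guidelines treat as "sensitive," "preachy," or "neutral" is baked into which response gets marked as chosen, long before any outside policy can audit it.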

The directives for what counts as sensitive or neutral are decided by the tech companies making the chatbots.

"We don't define what neutral looks like. That's up to the customer," Rowan Stone, the CEO of data labeling firm Sapien, which works with customers like Amazon and MidJourney, told BI. "Our job is to make sure they know exactly where the data came from and why it looks the way it does."

In some cases, tech companies have recalibrated their chatbots to make their models less woke, more flirty, or more engaging.

They are also already trying to make them more neutral.

BI previously reported that contractors for Meta and Google projects were often told to flag and penalize "preachy" chatbot responses that sounded moralizing or judgmental.

Is 'neutral' the right approach?

Sara Saab, the VP of product at Prolific, an AI and data training company, told BI that thinking about AI systems that are perfectly neutral "may be the wrong approach" because "human populations are not perfectly neutral."

Saab added, "We need to start thinking about AI systems as representing us and therefore give them the training and fine-tuning they need to know contextually what the culturally sensitive, appropriate tone and pitch is for any interaction with a human being."

Tech companies must also consider the risk of bias creeping into AI models from the datasets they are trained on.

"Bias will always exist, but the key is whether it's there by accident or by design," said Sapien's Stone. "Most models are trained on data where you don't know who created it or what perspective it came from. That makes it hard to manage, never mind fix."

Big Tech's tinkering with AI models has sometimes led to unpredictable and harmful outcomes

Earlier this month, for example, Elon Musk's xAI rolled back a code update to Grok after the chatbot went on a 16-hour antisemitic rant on the social media platform X.

The bot's new instructions included a directive to "tell it like it is."

Read the original article on Business Insider
