Mashable · July 25, 03:38
What is Woke AI?

The U.S. government recently released its AI Action Plan, a key part of which targets so-called "woke AI," accompanied by a related executive order. The plan aims to ensure AI is "truth-seeking" and "ideologically neutral," treating AI outputs shaped by ideologies such as diversity, equity, and inclusion (DEI) as biased. Critics, however, argue the move could threaten free speech and violate the First Amendment, since the government is attempting to control the information AI provides. The plan may also affect AI companies' access to federal contracts. Although the definition of "woke AI" is vague and contested, both the left and the right worry about AI bias, just with different emphases. AI models inherently reflect the biases of their training data, and government intervention could pressure companies to adjust their models to suit a particular political stance.

🎯 **Defining "woke AI" and the government's position**: The White House defines "woke AI" as biased AI output driven by ideologies like DEI at the cost of accuracy. The government requires AI models to follow the principles of "truth-seeking" and "ideological neutrality": AI should be honest, respect history and science, and remain neutral rather than imposing ideology on users. In actual legal documents, however, the term is vaguely defined; the government treats "critical race theory," "transgenderism," and similar concepts as potential bias, and may use this standard to restrict AI companies' access to federal contracts.

⚖️ **A potential threat to free speech**: Critics argue that by using an executive order to control the information AI provides, the government may infringe users' freedom of speech and their right to receive information, much as if it were dictating the content of a newspaper or website. Legal experts note that the government has no authority to prescribe what ideas AI conveys; even though it may decide which services to purchase, it should not use that power to punish AI services that deliver particular information.

⚖️ **Shared concerns and divides across the political spectrum**: Although "woke" means different things across the political spectrum, both left and right worry about AI bias. The left focuses mainly on AI discriminating against minority groups in areas such as hiring, lending, and facial recognition; the right worries about bias against conservative viewpoints in AI models, and about AI being led to "lie" in the name of political correctness.

🧠 **AI bias stems from human bias**: AI models are fundamentally reflections of their training data, and so they amplify inherent human biases. When selecting training data and designing models, developers may introduce bias unintentionally through their own environments and values; for example, AI developers living in liberal areas may unconsciously influence a model's output.

⚙️ **How AI companies may respond, and the risks**: To comply with government requirements, AI companies could adjust model output by controlling training data or by using system prompts. Heavy-handed system prompts can backfire, however: some AI models have produced extreme statements about "white genocide" or styled themselves "Mecha Hitler." Greater transparency, including disclosure of training data and system prompts, is seen as an important safeguard against abuse.

President Donald Trump says that "woke AI" is a pressing threat to truth and independent thought. Critics say his plan to combat so-called woke AI represents a threat to freedom of speech and potentially violates the First Amendment.

The term has taken on new significance since the president outlined the White House's AI Action Plan on Wednesday, July 23, part of a push to secure American dominance in the fast-growing artificial intelligence sector.

The AI Action Plan informs a trio of executive orders.

The action plan checks off quite a few items from the Big Tech wishlist and borrows phrasing like "truth-seeking" directly from AI leaders like Elon Musk. The executive order about woke AI also positions large-language models with allegedly liberal leanings as a new right-wing bogeyman.

So, what is woke AI? It's not an easy term to define, and the answer depends entirely on who you ask. In response to Mashable's questions, a White House spokesperson pointed us to this language in a fact sheet issued alongside the woke AI order: “biased AI outputs driven by ideologies like diversity, equity, and inclusion (DEI) at the cost of accuracy.”

What is Woke AI? Unpacking the White House's definition

Interestingly, except for the title, the text of the woke AI executive order doesn't actually use this term. And even though the order contains a definitions section, the term itself isn't clearly defined there either. (It's possible "woke AI" is simply too nebulous of a concept to write into actual legal documents.) However, the fact sheet issued by The White House states that government leaders should only procure "large language models (LLMs) that adhere to 'Unbiased AI Principles' defined in the Order: truth-seeking and ideological neutrality."

And here's how the fact sheet defines "truth-seeking" and "ideological neutrality":

Truth-seeking means that LLMs shall be truthful and prioritize historical accuracy, scientific inquiry, and objectivity, and acknowledge uncertainty where reliable information is incomplete or contradictory.

Ideological neutrality means that LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas like DEI, and that developers will not intentionally encode partisan or ideological judgments into an LLM’s outputs unless those judgments are prompted by or readily accessible to the end user.

So, it seems the White House defines woke AI as LLMs that are not sufficiently truth-seeking or ideologically neutral. The executive order also calls out specific examples of potential bias, including "critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism." Obviously, there is a culture-wide dispute about whether those subjects (including "transgenderism," which is not an accepted term by transgender people) are inherently biased.

Critically, AI companies that fail to meet the White House's litmus tests could be locked out of lucrative federal contracts. And because the order defines popular liberal political beliefs — not to mention an entire group of human beings — as inherently biased, AI companies may face pressure to adjust their models' inputs and outputs accordingly.

The Trump administration has talked a big game about free speech, but critics of the action plan say this order is itself a major threat to free speech.

"The part of the action plan titled 'Ensure that Frontier AI Protects Free Speech and American Values' seems to be motivated by a desire to control what information is available through AI tools and may propose actions that would violate the First Amendment," said Kit Walsh, Director of AI and Access-to-Knowledge Legal Projects at the Electronic Frontier Foundation, in a statement to Mashable. "Generative AI implicates the First Amendment rights of users to receive information, and typically also reflects protected expressive choices of the many human beings involved in shaping the messages the AI writes. The government can no more dictate what ideas are conveyed through AI than through newspapers or websites."

"The government has more leeway to decide which services it purchases for its own use, but may not use this power to punish a publisher for making available AI services that convey ideas the government dislikes," Walsh said.

Is Woke AI a real problem?

President Trump has said the U.S. will do "whatever it takes" to win the AI race. Credit: Kevin Dietsch/Getty Images

Again, the answer depends entirely on where you fall along the political fault line, and the term "woke" has become controversial in recent years.

This adjective originated in the Black community, where it described people with a political awareness of racial bias and injustice. More recently, many conservatives have started to use the word as a slur, a catch-all insult for supposedly politically correct liberals.

In truth, both liberals and conservatives are concerned about bias in large-language models.

In November 2024, the Heritage Foundation, a conservative think tank, hosted a panel on YouTube on the topic of woke AI. Curt Levey, President of the Committee For Justice, was one of the panel's experts, and as a conservative attorney who has also worked in the artificial intelligence industry, he had a unique perspective to share.

I think it's interesting that both the left and the right are complaining about the danger of bias in AI, but they're…focused on very different things. The left is focused mainly on the idea that AI models discriminate against various minority groups when they're making decisions about hiring, lending, bail amounts, facial recognition. The right on the other hand is concerned about bias against conservative viewpoints and people in large language models like ChatGPT.

Elon Musk has made it clear that he thinks that AI models are inheriting a woke mindset from their creators, and that that's a problem if only because it conflicts with being, what he calls, maximally truth-seeking. Musk says that companies are teaching AI to lie in the name of political correctness.

Levey also said that if LLMs are biased, that doesn't necessarily mean they were "designed to be biased." He added, the "scientists building these generative AI models have to make choices about what data to use, and you know, many of these same scientists live in very liberal areas like the San Francisco Bay area, and even if they're not trying to make the system biased, they may very well have unconscious biases when it comes to picking data.”

A conservative using the phrase "unconscious bias" without rolling his eyes? Wild.

LLMs have biases because we have biases

Credit: Cheng Xin/Getty Images

Ultimately, AI models reflect the biases of the content they're trained on, and so they reflect our own biases back at us. In this sense, they're like a mirror, except a mirror with a tendency to hallucinate.

To comply with the Executive Order, AI companies could try to tamp down on "biased" answers in several ways. First, by controlling the data used to train these systems, they can calibrate the outputs. They could also use system prompts, which are high-level instructions that govern all of the model's outputs.
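To make the second approach concrete, here is a minimal sketch of how a system prompt fits into a chat-style request. It assumes an OpenAI-style message format; the model name and the wording of the instruction are hypothetical, chosen only to illustrate that the system message is prepended to every conversation and steers all of the model's replies.

```python
def build_request(user_message: str) -> dict:
    """Assemble a chat request in which a system prompt governs the reply.

    The structure mirrors common chat-completion APIs; "example-model"
    is a placeholder, not a real model name.
    """
    system_prompt = (
        "You are a neutral, nonpartisan assistant. Answer factually and "
        "do not favor any ideological position."
    )
    return {
        "model": "example-model",  # hypothetical model name
        "messages": [
            # The system message is injected ahead of the user's turn,
            # so it shapes every response the model generates.
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }
```

Because the system prompt rides along with every request, a small change to its wording can shift the tone of all of a model's output at once, which is exactly why overly aggressive edits to it can misfire.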

Of course, as xAI has demonstrated repeatedly, the latter approach can be... problematic. First, xAI's chatbot Grok developed a fixation on "white genocide in South Africa," and more recently started to call itself Mecha Hitler. Transparency could provide a check on potential abuses, and there's a growing movement to force AI companies to disclose the training data and system prompts behind their models.

Regardless of how you feel about woke AI, you should expect to hear the term a lot more in the months ahead.
