Call on AI Companies: Publish Your Whistleblowing Policies

 

A campaign launched by a coalition of 30+ whistleblowing and AI organizations calls on AI companies to strengthen transparency around internal whistleblower protections. The article notes that most frontier AI companies lag behind global standards in publishing their whistleblowing policies and system transparency; for example, 5 out of 6 companies have not published their whistleblowing policies. This leaves the public and AI insiders with little insight into what protections exist. The campaign asks AI companies to publish their internal whistleblowing policies ("Level 1") and reports on system performance, effectiveness, and outcomes ("Level 2"). The article stresses that internal channels are a key path for surfacing potential risks in AI development, and that greater transparency helps establish a virtuous "race to the top." It also documents problems at AI companies with policy transparency, employee awareness, trust, and past cases of retaliation, and calls on AI companies to publish the relevant information in order to build a healthier internal risk-reporting culture.

🌐 **Transparency of AI companies' internal whistleblowing systems urgently needs improvement**: Most frontier AI companies currently do not publish their internal whistleblowing policies, leaving the public and employees with little understanding of how companies handle internal concerns and forcing them to rely on "trust" rather than verifiable information. More than 30 organizations have jointly called on AI companies to publish their policies and performance reports to establish more transparent accountability.

💡 **Internal whistleblowers are key to spotting AI risks**: Given the complexity and proprietary nature of AI development, internal employees are the group best positioned to identify potentially dangerous developments, misconduct, or safety shortcuts. Internal channels are usually the first route employees try when raising concerns, so their effectiveness and reliability matter greatly, especially for risks that are significant but not obvious from the outside.

🔍 **Current AI company whistleblowing systems suffer from opacity and a trust deficit**: Most AI companies (e.g. Anthropic, Google DeepMind, xAI, Mistral) have not published their whistleblowing policies, and OpenAI only published its policy under public pressure. Companies also generally disclose no data on system effectiveness or outcomes. Interviews with insiders show that employees often do not know, understand, or trust their companies' whistleblowing systems, and worry about subtle, indirect consequences rather than overt retaliation.

📈 **Transparency is the foundation for improving systems and protecting employees**: Publishing whistleblowing policies and related data lets the public assess whether these systems actually protect employees who raise safety concerns, and lets employees compare protections across companies when making career decisions. Transparency also pushes companies to improve their systems, raises employee awareness and trust, and fosters a healthy corporate culture, which is in companies' business interest.

📢 **Call on AI companies to publish policies and disclose data**: The campaign asks AI companies to meet two levels of transparency. Level 1 is publishing the complete whistleblowing policy, clarifying the scope of protection, reporting channels, investigation procedures, protective measures, and guarantees of independence. Level 2 is publishing performance data, including the number of reports, how they were resolved, retaliation complaints, employee satisfaction, and system improvements and audit results.

Published on July 31, 2025 10:04 PM GMT

Announcing a coalition of 30+ whistleblowing and AI organizations, calling for stronger transparency on company-internal whistleblower protections.

Rating of whistleblower system transparency of major AI companies. Details below.
*Please note that AIWI only evaluates the transparency of the policy and outcome reporting—not the content or quality of the underlying system, protections, culture, or past patterns of retaliation.

Frontier AI companies currently lag behind global standards and best practices in creating adequate transparency around their internal whistleblowing systems: 5 out of 6 companies in our target set do not even publish their whistleblowing policies.

That means that we, the public and employees in AI, are forced to 'trust' that companies will address concerns well internally.

This is far from good enough 

...and why we, at the National Whistleblower Day Event in Washington DC yesterday, launched a campaign asking AI companies to publish their internal whistleblowing policies ("Level 1") and reports on their whistleblowing system performance, effectiveness, and outcomes ("Level 2"). 

We are very proud of the coalition we have the privilege of representing here - uniting most of the world's most prominent whistleblowing organizations and scholars with equally prominent AI counterparts.

See the full list of signatories further below, or on our campaign page.

This Post

You can find the actual campaign page, including evidence and sources, here: https://publishyourpolicies.org/

In this post I'll share the same message with a slightly altered 'storyline'. 

Why This Matters Now

I don't need to make the case here for why we should care about how AI companies develop and deploy their frontier models - especially over the coming years.

Likewise, if you've seen righttowarn, you're likely aware of this line of reasoning: Many risks will only be visible to insiders. The current black-box nature of AI development means employees are often the first—and potentially only—people positioned to spot dangerous developments, misconduct, or safety shortcuts.

It therefore matters that AI companies build up the infrastructure required to address the concerns already being raised today, and that we enter a 'race to the top' on system quality as soon as possible.

Transparency on internal whistleblowing systems, by allowing for public feedback and empowering employees to understand and compare protections, is the mechanism for entering that 'race to the top'.

Important note 1: We are talking about company-internal whistleblowing systems here (although they can extend arbitrarily far in terms of 'covered persons', e.g. to suppliers, customers, etc.). This does NOT diminish the importance of legal protections for AI whistleblowers or independent support offerings for insiders.
But the reality is (see below) that we expect the majority of risks to be flagged internally first. That means internal channels are critical and must not be neglected. If you like the 'Swiss cheese model' of risk management: we want to make sure protections are as strong as possible at every layer.

Important note 2: Both in this post and our main post, we are not evaluating policy or system quality. We only talk about the degree of transparency provided.  

The Case for Transparency

1. Insiders Are Uniquely Positioned

Current and former AI employees have recognized that they are "among the few people who can hold [companies] accountable to the public." They've called for companies to "facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company's board, to regulators, and to an appropriate independent organization."

Research consistently shows that employees are often the first to recognize potential wrongdoing or risk of harm. In AI specifically, the technical complexity and proprietary nature of development means many risks are only visible to those with internal access.

2. Internal Channels Are a Major Path

Data from the SEC Whistleblower Program shows that three-quarters of award recipients initially attempted to address concerns within their organizations before seeking external remedies. Employees naturally try internal channels first, and we expect this to be no different in frontier AI companies: 

Nature of work: Research and engineering work relies on discussion. It is standard practice for concerns to be escalated internally - especially when they are less 'clear cut' and independently identifiable than, e.g., accounting fraud or bribery (which, however, are still in scope of whistleblowing policies).

Culture: Addressing concerns internally first is a common part of many Silicon Valley organizations.

This means that these systems must work reliably: When internal systems fail, we all lose. Companies miss opportunities to address problems early, employees face unnecessary risks, and the public remains unaware of safety issues until they potentially become crises.

3. Current Systems Are Opaque and Potentially Broken

Major AI companies have not published their whistleblowing policies. The recent Future of Life Institute AI Safety Index highlighted that Anthropic, Google DeepMind, xAI, and Mistral lack public whistleblowing policies, making neutral assessment impossible. They, likewise, call for the publication of policies. 

OpenAI is the sole exception—and they only published their policy following public pressure over their restrictive non-disparagement clauses. Even then, none of the major AI companies publish effectiveness metrics or outcome data.

This stands in stark contrast to other industries. Companies across sectors routinely publish whistleblowing policies—from AI-related organizations like ASML to industrial firms like Tata Steel to financial services companies. Many also publish regular effectiveness evaluations and outcome statistics.

Conversations with insiders also reveal gaps:

Employee Awareness: Interviews with current and former frontier AI company insiders show that many employees don't know, understand, or trust their companies' internal reporting systems. As one insider told us: "I'm not well-informed about our company's whistleblowing procedures (and it feels uncomfortable to inquire about them directly)."

Trust Deficit: AI employees suspect that making reports would be ineffective or could make their work lives more difficult. Another insider shared: "I anticipate that using official reporting channels would likely result in subtle, indirect consequences rather than overt retaliation like termination."

History of Retaliation: AI companies have attempted to suppress individuals voicing concerns (OpenAI's restrictive NDAs) and have faced cases around alleged wrongful termination for speaking up on research misconduct (Google). 

We also have good reason to believe that multiple companies' internal whistleblowing policies are currently in violation of the EU Whistleblowing Directive. If you are interested: Happy to provide details via DM. 

It might still be the case that certain systems are working relatively well today (for at least one of the organizations in the set, we have an 'okay' impression based on conversations with individuals) - but the reality is that neither insiders nor we know.

Every insider we have spoken to to date supports the publication of whistleblowing policies. If you are an insider and you don't - please reach out and share your thoughts with us (or comment below).

4. Transparency Enables Verification and Improvement

Without published policies & outcome transparency, the public cannot assess whether internal systems actually protect employees who raise safety concerns. 

Employees cannot compare protections across companies when making career decisions. 

Policymakers cannot identify coverage gaps or craft appropriate regulations.

Companies benefit from improved systems through public feedback and heightened employee awareness. Empirical evidence shows that there is a strong 'business case' for improved speak-up cultures and whistleblowing systems - from improved innovation to increased employee loyalty. This is why, for example, shareholder representatives have called on Google to improve its whistleblowing systems.

5. The Information Vacuum Serves No Legitimate Purpose

We are only calling for transparency: this should create no major workload for companies. (If it does, then perhaps that means there were things to be improved upon.)

Whistleblowing policies contain procedural frameworks and legal guarantees—not trade secrets or competitive advantages. There's no business case for secrecy, but substantial evidence for the benefits of transparency.

If companies truly care about developing a strong speak-up culture and protecting those who live it: Publish. Your. Policies.  

What We're Asking For

We're calling on AI companies to meet two levels of transparency [this is an excerpt - see campaign page for details]:

Level 1: Policy Transparency (minimum baseline) - publish the full internal whistleblowing policy, including the scope of protection, reporting channels, investigation procedures, protective measures, and guarantees of independence.

Level 2: Effectiveness Transparency (what companies should strive for) - publish data on system performance and outcomes, including the number of reports received, how they were resolved, retaliation complaints, employee satisfaction, and system improvements and audit results.

Companies that take whistleblowing seriously should already gather this data for continuous improvement. 

Publication is simply a matter of transparency.

The Coalition

This call is supported by a broad coalition of scholars, AI safety organizations, and whistleblowing advocacy groups:

Organizations:

Academic Signatories:

Moving Forward

This campaign offers an opportunity for AI companies to demonstrate commitment to integrity cultures where flagging risks is a normal and expected responsibility.

We're not asking companies to reveal competitive secrets—we're asking them to show they're serious about the concern systems they claim to have. Transparency costs nothing but builds everything.

The stakes are too high for "trust us" to be enough. When AI companies publicly acknowledge existential risks, they must also demonstrate that employees can safely report concerns about those risks.

What you can do

If you believe our call is sensible and you are...

An insider at an AI company: Ask your management why they are not publishing their policies. Share our call with them.

A leader of an AI company: You can lead the charge! A strong speak-up culture benefits your employees, shareholders, and you (unless you'd prefer risks to be hidden until it's too late). We can be in the same boat - if you genuinely care about protecting those speaking up. If you credibly commit to Level 2, we will commend you for it.

An outsider: Spread the word. Every share gets us closer to transparency and a world where insiders in AI can raise their concerns as they see them. We might also announce a second round of signatories. Contact us if you would like to be on this list.

Join the campaign: https://aiwi.org/publishyourpolicies/

Contact: For questions or to add your organization's support, reach out through the campaign website.

This campaign is led by The AI Whistleblower Initiative (AIWI, formerly OAISIS), an independent, nonpartisan, nonprofit organization supporting whistleblowers in AI.


