TechCrunch News · April 16, 00:21
OpenAI ships GPT-4.1 without a safety report

OpenAI has released GPT-4.1, a new generation of AI models. Despite the performance gains, the company did not publish the safety report that usually accompanies its model releases, drawing scrutiny. OpenAI's explanation is that GPT-4.1 is not a "frontier model" and therefore will not get a separate system card. The move runs counter to the AI industry's push toward transparency and comes at a time when OpenAI's safety practices are already being questioned. Former employees worry that the company may cut corners on safety in pursuit of profit, while industry observers consider safety reports essential for assessing a model's risks. OpenAI has previously pledged greater transparency, and this decision stands in contrast to those commitments, fueling debate over AI safety oversight and industry norms.

🤔 OpenAI released GPT-4.1 without its usual safety report, raising questions about the company's transparency. OpenAI says GPT-4.1 is not a "frontier model," so no separate system card will be published for it.

💡 Safety reports typically document internal and third-party test results and can reveal potential risks, such as a tendency to deceive or dangerous persuasiveness. They are widely seen as good-faith efforts by AI labs to support independent research.

⚠️ The move contradicts OpenAI's earlier commitments to governments: ahead of both the UK AI Safety Summit and the Paris AI Action Summit, OpenAI emphasized system cards as a key part of its approach to transparency.

📉 Under competitive pressure, OpenAI has reportedly cut the time and resources devoted to safety testing, even as experts argue that GPT-4.1's performance improvements make a safety report all the more important.

⚖️ Former employees have raised concerns that OpenAI may be compromising on safety, while the AI industry as a whole has resisted efforts to write safety reporting requirements into law.

On Monday, OpenAI launched a new family of AI models, GPT-4.1, which the company said outperformed some of its existing models on certain tests, particularly benchmarks for programming. However, GPT-4.1 didn’t ship with the safety report that typically accompanies OpenAI’s model releases, known as a model or system card.

As of Tuesday morning, OpenAI had yet to publish a safety report for GPT-4.1 — and it seems it doesn’t plan to. In a statement to TechCrunch, OpenAI spokesperson Shaokyi Amdo said that “GPT-4.1 is not a frontier model, so there won’t be a separate system card released for it.”

It’s fairly standard for AI labs to release safety reports showing the types of tests they conducted internally and with third-party partners to evaluate the safety of particular models. These reports occasionally reveal unflattering information, like that a model tends to deceive humans or is dangerously persuasive. By and large, the AI community perceives these reports as good-faith efforts by AI labs to support independent research and red teaming.

But over the past several months, leading AI labs appear to have lowered their reporting standards, prompting backlash from safety researchers. Some, like Google, have dragged their feet on safety reports, while others have published reports lacking in the usual detail.

OpenAI’s recent track record isn’t exceptional either. In December, the company drew criticism for releasing a safety report containing benchmark results for a model different from the version it deployed into production. Last month, OpenAI launched a model, deep research, weeks prior to publishing the system card for that model.

Steven Adler, a former OpenAI safety researcher, noted to TechCrunch that safety reports aren’t mandated by any law or regulation — they’re voluntary. Yet OpenAI has made several commitments to governments to increase transparency around its models. Ahead of the UK AI Safety Summit in 2023, OpenAI in a blog post called system cards “a key part” of its approach to accountability. And leading up to the Paris AI Action Summit in 2025, OpenAI said system cards provide valuable insights into a model’s risks.

“System cards are the AI industry’s main tool for transparency and for describing what safety testing was done,” Adler told TechCrunch in an email. “Today’s transparency norms and commitments are ultimately voluntary, so it is up to each AI company to decide whether or when to release a system card for a given model.”

GPT-4.1 is shipping without a system card at a time when current and former employees are raising concerns over OpenAI’s safety practices. Last week, Adler and 11 other ex-OpenAI employees filed a proposed amicus brief in Elon Musk’s case against OpenAI, arguing that a for-profit OpenAI might cut corners on safety work. The Financial Times recently reported that the ChatGPT maker, spurred by competitive pressures, has slashed the amount of time and resources it allocates to safety testers.

While GPT-4.1, the most capable model in the new family, isn't the highest-performing model in OpenAI's roster, it does make substantial gains in efficiency and latency. Thomas Woodside, co-founder and policy analyst at Secure AI Project, told TechCrunch that the performance improvements make a safety report all the more critical. The more sophisticated the model, the higher the risk it could pose, he said.

Many AI labs have batted down efforts to codify safety reporting requirements into law. For example, OpenAI opposed California’s SB 1047, which would have required many AI developers to audit and publish safety evaluations on models that they make public.
