The Verge - Artificial Intelligence | August 9, 2024
Democrats push Sam Altman on OpenAI’s safety record

US Senator Elizabeth Warren and Representative Lori Trahan have asked OpenAI to detail its internal whistleblower mechanisms and safety review processes, after former employees complained that internal criticism is often suppressed. In a letter to OpenAI CEO Sam Altman, the two lawmakers cited several instances suggesting problems with OpenAI's safety procedures and asked Altman to provide the requested information by August 22.

🤔 **Employee safety complaints allegedly ignored at OpenAI**: In their letter, Warren and Trahan note that several former employees have accused OpenAI of suppressing internal criticism and question how the company handles safety concerns. For example, they point to 2022, when OpenAI tested an unreleased version of GPT-4 in Microsoft's Bing search engine in India without approval from its safety board. They also recall Altman's brief ouster from the company in 2023, driven in part by the board's concern that OpenAI was commercializing its technology before fully understanding its consequences.

🧐 **OpenAI moves to address safety concerns**: OpenAI appears to be taking steps to address safety concerns. In July, the company announced a partnership with Los Alamos National Laboratory to explore how advanced AI models can safely aid bioscientific research. Last week, Altman announced on X that OpenAI is collaborating with the US AI Safety Institute and emphasized that the company will dedicate 20 percent of its computing resources to safety research. He also said OpenAI has removed non-disparagement clauses for employees and provisions allowing the cancellation of vested equity.

🚨 **OpenAI faces regulatory pressure**: Warren and Trahan's letter reflects lawmakers' growing attention to AI safety and the possibility of regulatory action against AI companies. They ask Altman for information on how OpenAI's internal safety hotline is used, how its safety review process works, and any instances in which products bypassed safety reviews. They also ask about OpenAI's conflict-of-interest policy and whether Altman has been required to divest any outside holdings.

⚠️ **OpenAI's safety record draws scrutiny**: OpenAI's recent safety issues have attracted broad attention, raising questions about whether its internal safety procedures are effective, whether the company puts profit ahead of safety, and whether the government should step in to regulate AI companies. Warren and Trahan's letter shows that lawmakers are watching OpenAI's conduct closely and may take further action to protect the public interest.

📊 **AI safety remains an urgent challenge**: The rapid development of AI brings enormous opportunities and challenges. Ensuring the safe and responsible use of AI is critical and requires joint effort from governments, companies, and research institutions. The challenges OpenAI faces reflect the global importance of AI safety and the need for effective regulatory frameworks and safety standards to keep AI development on a sound footing.

Sen. Elizabeth Warren (D-MA) and Rep. Lori Trahan (D-MA) are calling for answers on how OpenAI handles whistleblowers and safety reviews after former employees complained that internal criticism is often stifled.

“Given the discrepancy between your public comments and reports of OpenAI’s actions, we request information about OpenAI’s whistleblower and conflict of interest protections in order to understand whether federal intervention may be necessary,” Warren and Trahan wrote in a letter exclusively shared with The Verge.

The lawmakers cited several instances where OpenAI’s safety procedures have been called into question. For example, they said, in 2022, an unreleased version of GPT-4 was being tested in a new version of the Microsoft Bing search engine in India before receiving approval from OpenAI’s safety board. They also recalled Altman’s brief ousting from the company in 2023, as a result of the board’s concerns, in part, “over commercializing advances before understanding the consequences.”

Warren and Trahan’s letter to Altman comes as the company is plagued by a laundry list of safety concerns, which often are at odds with the company’s public statements. For instance, an anonymous source told The Washington Post that OpenAI rushed through safety tests, the Superalignment team (which was partly responsible for safety) was dissolved, and a safety executive quit claiming that “safety culture and processes have taken a backseat to shiny products.” Lindsey Held, a spokesperson for OpenAI, denied the claims in The Washington Post’s report, saying that the company “didn’t cut corners on our safety process, though we recognize the launch was stressful for our teams.”

Other lawmakers have also sought answers on the company’s safety practices, including a group of senators led by Brian Schatz (D-HI) in July. Warren and Trahan asked for further clarity on OpenAI’s responses to that group, including on its creation of a new “Integrity Line” for employees to report concerns.

Meanwhile, OpenAI appears to be on the offensive. In July, the company announced a partnership with Los Alamos National Laboratory to explore how advanced AI models can safely aid in bioscientific research. Just last week, Altman announced via X that OpenAI is collaborating with the US AI Safety Institute and emphasized that 20 percent of computing resources at the company will be dedicated to safety (a promise originally made to the now-defunct Superalignment team). In the same post, Altman also said that OpenAI has removed non-disparagement clauses for employees and provisions allowing the cancellation of vested equity, a key issue in Warren’s letter.

The letter signals a key policy interest for the lawmakers, who previously introduced bills to expand protections for whistleblowers, like the FTC Whistleblower Act and the SEC Whistleblower Reform Act. It could also serve as a signal to law enforcement agencies, who so far have reportedly set their sights on possible antitrust violations and harmful data practices by OpenAI.

Warren and Trahan asked Altman to provide information on how its new AI safety hotline for employees was being used, and how the company follows up on reports. They also asked for “a detailed accounting” of all the times that OpenAI products have “bypassed safety protocols,” and in what circumstances a product would be allowed to skip over a safety review. The lawmakers are also seeking information on OpenAI’s conflicts policy. They asked Altman whether he’s been required to divest from any outside holdings, and “what specific protections are in place to protect OpenAI from your financial conflicts of interest?” They asked Altman to respond by August 22nd.

Warren also notes how vocal Altman has been about his concerns regarding AI. Last year, in front of the Senate, Altman warned that AI’s capabilities could be “significantly destabilizing for public safety and national security” and emphasized the impossibility of anticipating every potential abuse or failure of the technology. These warnings seemed to resonate with lawmakers—in OpenAI’s home state of California, Senator Scott Wiener is pushing for a bill to regulate large language models, including restrictions that would hold companies legally accountable if their AI is used in harmful ways.
