TechCrunch News · April 14, 05:08
Access to future AI models in OpenAI’s API may require a verified ID

OpenAI recently published details of a "Verified Organization" identity-verification process on its website, intended to tighten security controls around its most advanced AI models. The process requires developers to submit a government-issued ID to unlock access to the most advanced models and capabilities on the OpenAI platform. The move is meant to curb misuse of the OpenAI API and to prepare for upcoming model releases. It may also be driven by security concerns, including preventing intellectual-property theft and countering malicious use by actors such as groups based in North Korea.

🛡️ OpenAI has introduced the "Verified Organization" process as a new way for developers to unlock access to its advanced models and capabilities. The process requires a government-issued ID.

🔑 The verification process is meant to ensure AI is used safely. OpenAI says it is responding to a small minority of developers who violate its usage policies, and preparing for upcoming model releases.

🚨 The move likely ties into OpenAI's broader push to harden the security of its products. The company has published reports on detecting and mitigating malicious use of its models, including preventing intellectual-property theft and countering potential threats from specific regions.

🌎 OpenAI blocked access to its services in China last summer, and has investigated whether DeepSeek, a China-based AI lab, exfiltrated data through its API to train models, in violation of OpenAI's terms of service.

OpenAI may soon require organizations to complete an ID verification process in order to access certain future AI models, according to a support page published to the company’s website last week.

The verification process, called Verified Organization, is “a new way for developers to unlock access to the most advanced models and capabilities on the OpenAI platform,” reads the page. Verification requires a government-issued ID from one of the countries supported by OpenAI’s API. An ID can only verify one organization every 90 days, and not all organizations will be eligible for verification, says OpenAI.
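For developers, the practical effect would likely surface at the API level: a request against a verification-gated model from an unverified organization should fail with a permission error. The sketch below is an illustration only, not behavior confirmed by the article; it assumes the current OpenAI Python SDK, and the model name "gpt-5-restricted" is a hypothetical placeholder.

```python
# Hedged sketch: handling a permission error from a verification-gated model.
# "gpt-5-restricted" is a hypothetical placeholder, not a real model name.
from openai import OpenAI, PermissionDeniedError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

try:
    response = client.chat.completions.create(
        model="gpt-5-restricted",  # assumed to require Verified Organization
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)
except PermissionDeniedError as err:
    # A 403 here may indicate the organization has not completed the
    # Verified Organization process for this model; verification is handled
    # in the OpenAI platform dashboard, not through the API itself.
    print(f"Access denied -- has the organization been verified? {err}")
```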

“At OpenAI, we take our responsibility seriously to ensure that AI is both broadly accessible and used safely,” reads the page. “Unfortunately, a small minority of developers intentionally use the OpenAI APIs in violation of our usage policies. We’re adding the verification process to mitigate unsafe use of AI while continuing to make advanced models available to the broader developer community.”

The new verification process could be intended to beef up security around OpenAI’s products as they become more sophisticated and capable. The company has published several reports on its efforts to detect and mitigate malicious use of its models, including by groups allegedly based in North Korea.

It may also be aimed at preventing IP theft. According to a report from Bloomberg earlier this year, OpenAI was investigating whether a group linked with DeepSeek, the China-based AI lab, exfiltrated large amounts of data through its API in late 2024, possibly for training models — a violation of OpenAI’s terms.

OpenAI blocked access to its services in China last summer.
