IT之家 | July 30, 17:41
China Artificial Intelligence Security and Safety Commitments Framework Released

The 2025 World Artificial Intelligence Conference focused on AI safety and development, and the China AI Safety and Development Association (CnAISDA) released the China Artificial Intelligence Security and Safety Commitments Framework. Building on earlier work, the Framework adds new content on international cooperation and frontier risk prevention, reflecting the Chinese industry's determination to steer AI toward beneficial development. It covers seven areas: security team building, model testing, data security, infrastructure security, model transparency, frontier research, and international cooperation. Its aim is to use industry self-regulation and high-level safety to safeguard high-quality development, keep AI people-centered and oriented toward good, and contribute Chinese wisdom and strength to global AI governance.

🌐 **Strengthening organization and mechanism building**: The Commitments Framework requires each organization to set up a dedicated security and safety team or organizational structure, designate a security lead, and proactively define security risk baselines. Through risk management spanning the entire AI development and deployment life cycle, with clearly defined processes for risk identification and response, it aims to build a sound security risk management mechanism and ensure that AI is developed in a well-governed way.

🛡️ **Improving model safety and reliability**: The Framework calls for rigorous security and safety testing of AI models, especially large models. Through professional simulation testing and red-team testing, it focuses on evaluating models' general understanding, reasoning, and decision-making capabilities, as well as their performance in key domains such as industry, education, healthcare, finance, and law, to ensure safety and reliability.

🔒 **Safeguarding data security and privacy**: Organizations are required to establish and implement data security protection policies and technical measures, and to effectively detect and handle data poisoning. Business data must be stored with encryption and protected by access controls, strictly preventing trade secrets, user privacy, and knowledge bases from being improperly output by AI models, so as to protect data security and privacy rights.

💻 **Strengthening infrastructure security**: The Framework proposes building capabilities for monitoring and protecting the software and hardware used to deploy AI systems, conducting regular, dynamic penetration testing, and identifying and reporting security risks. It also calls for a complete infrastructure security incident response mechanism, covering emergency handling procedures, assignment of responsibility, and post-incident improvement, to address potential security risks.

💡 **Enhancing model transparency and disclosure**: Organizations are encouraged to proactively disclose their safety governance practices and improve transparency toward all stakeholders. They should publicly disclose models' capabilities, applicable domains, and limitations, and clearly communicate potential risks to the public through model documentation, service agreements, and similar channels, to build trust.

🔬 **Deepening frontier safety research and risk prevention**: The Framework encourages active frontier safety research, the development and deployment of AI systems oriented toward good, and the public release of research findings to help address societal challenges. It calls for stronger assessment of the risk that AI systems may be misused in frontier areas and for preventing potential misuse in high-risk scenarios, so that technological development stays on track.

🤝 **Promoting international cooperation and skills outreach**: The Commitments Framework emphasizes active participation in global exchanges on AI safety governance and the sharing of risk prevention and control experience. It also stresses social responsibility, including public science outreach and skills training, to raise AI literacy across society, help bridge the intelligence divide, and promote inclusive, beneficial applications of the technology.

IT之家 reported on July 30 that the "AI Development and Safety" plenary session of the 2025 World Artificial Intelligence Conference and High-Level Meeting on Global AI Governance was held on the afternoon of July 26 in Shanghai. The session was hosted by the China AI Safety and Development Association (hereinafter "CnAISDA"). Wu Wei, member of the Standing Committee of the CPC Shanghai Municipal Committee and Executive Vice Mayor of Shanghai, and Huo Fupeng, Director of the Innovation-Driven Development Center of the National Development and Reform Commission, attended and delivered remarks.

Four Turing Award laureates, Geoffrey Hinton, Andrew Chi-Chih Yao, Yoshua Bengio, and David Patterson, along with more than 20 leading experts from China and abroad, attended the session to discuss frontier topics such as the safe development of AI and narrowing the intelligence divide, and to actively seek paths for international cooperation on AI safety governance.

Yu Xiaohui, President of the China Academy of Information and Communications Technology (CAICT) and Secretary-General of the Artificial Intelligence Industry Alliance (AIIA), was invited to join the dialogue and, together with representatives from Tsinghua University, the Shanghai Artificial Intelligence Laboratory, the China Center for Information Industry Development (CCID), and other organizations, led the release of the China Artificial Intelligence Security and Safety Commitments Framework.

The Framework builds on AIIA's Artificial Intelligence Safety Commitments (released in December 2024), adding new content on strengthening international cooperation on AI safety governance and guarding against frontier AI safety risks. It reflects the Chinese industry's firm resolve and openness to work closely with parties around the world to promote the beneficial development of AI.

Next, CAICT, as a CnAISDA member and the AIIA secretariat, will work with the signatory companies to put the Framework into practice through disclosure of actions, testing and verification, and other means, promoting the healthy and orderly development of AI in China in a beneficial, safe, and fair direction, and will actively pursue international governance cooperation to contribute Chinese wisdom and strength to global AI safety governance.

IT之家 appends the full official English text of the Framework below:

China Artificial Intelligence Security and Safety Commitments Framework

The wave of artificial intelligence (AI) is sweeping across the globe, actively generating technological dividends and exerting a profound influence on global economic and social development as well as the progress of human civilization. At the same time, we are keenly aware that AI brings unpredictable risks and complex challenges. To seize this new round of development opportunities, the members of the China AI Safety and Development Association (CnAISDA) solemnly launch the China Artificial Intelligence Security and Safety Commitments Framework. Through industry self-regulation, we will use high-level security and safety to support high-quality development and work together to promote the robust development of AI. This initiative is led by the China Academy of Information and Communications Technology (CAICT). We fully recognize that self-regulatory commitments are a critical foundation for earning society's trust. With these Commitments as our code of conduct, and subject to the oversight of all stakeholders, we will continuously improve and refine our approach, ensuring that the application of AI technologies remains people-centered and aligned with the principle of AI for good.

Commitment I: Establish security and safety teams or organizational structures and build security and safety risk management mechanisms. Set up specialized internal teams responsible for AI risk assessment and for security, safety, and governance work, and designate a leader accountable for AI security and safety. Proactively define security and safety risk baselines suited to actual needs, adopt appropriate security and safety measures for open-source releases, and implement risk management practices throughout the entire AI development and deployment life cycle, with clearly defined processes and measures for risk identification and mitigation.

Commitment II: Conduct security and safety testing of AI models to enhance their performance, safety, and reliability. Through dedicated simulation and red-teaming experts, rigorously test AI models prior to their release or update. For large models in particular, prioritize safety and reliability evaluations focusing on their general understanding, reasoning, and decision-making capabilities, as well as their performance in critical domains such as industry, education, healthcare, finance, and law.

Commitment III: Implement measures to safeguard the security of training data and operational data. Establish data security protection policies and deploy corresponding technical measures to detect and promptly address data poisoning incidents, ensuring the accuracy and reliability of training data. Encrypt operational data and enforce access controls so that trade secrets, user privacy, and user-uploaded knowledge bases are accessible only with authorization and cannot be improperly output by AI models, thereby safeguarding data security and privacy rights.

Commitment IV: Enhance infrastructure security. Develop robust capabilities for monitoring and protecting the software and hardware used in AI system deployments. Conduct regular and dynamic security penetration tests to simulate potential risk scenarios, identify and report security vulnerabilities in the infrastructure, and assess the associated risks. Establish an infrastructure security incident response mechanism, including emergency response procedures, clear assignment of responsibility, and post-incident improvement plans.

Commitment V: Enhance model transparency. Proactively disclose safety and security governance measures and improve transparency for all stakeholders. Publicly disclose each model's capabilities, applicable domains, and limitations, and inform the public of potential risks through model documentation, service agreements, and other means.

Commitment VI: Vigorously advance frontier safety and security research, and prevent safety and security risks in frontier fields. Research, develop, and deploy AI systems that embody the principle of AI for good, and transparently share research findings with the public to help address pressing challenges facing society. Strengthen the assessment of misuse risks of AI systems in frontier fields, and guard against their potential misuse in high-risk scenarios.

Commitment VII: Strengthen international cooperation on AI safety, security, and governance, and promote inclusive, beneficial applications of AI. Actively participate in global dialogues on AI safety, security, and governance, and contribute to the exchange of experience and best practices in risk identification, assessment, and mitigation. Fulfill social responsibilities by advancing public science communication, enhancing AI education, and providing skills training to improve AI literacy and capabilities, helping to bridge the intelligence divide.
