Palo Alto Networks Blog, October 30, 2024
Securing AI Infrastructure for a More Resilient Future

This article discusses the growing attention policymakers worldwide are paying to AI regulation and the importance of securing AI systems themselves. It introduces the work of AI safety institutes, including cybersecurity research on GenAI systems, and outlines the five core aspects of a GenAI cybersecurity framework: securing the AI supply chain, defending against adversarial attacks, establishing incident detection and response strategies, designing secure AI systems, and the relationship between AI security and ethical model use.

🧐 The early work of AI safety institutes has focused on the cybersecurity of powerful large language models and generative AI systems, which are being widely adopted across every sector of the economy; policymakers need to understand the unique risks these systems face and the strategies for mitigating them.

🚧 Securing the AI supply chain is critical. Enterprises need visibility into their full AI supply chain, including software, hardware and data, and should adopt strategies such as a Zero Trust network architecture to reduce risk, along with regularly updating dependency mappings.

🛡️ Adversarial attacks against GenAI systems can manipulate input data to make models produce incorrect predictions. Defenses such as adversarial training can improve model resilience, and safeguards such as data encryption are also recommended.

🔍 Establishing robust threat detection and incident response strategies for AI systems is essential. Systems should be designed for recoverability, with model behavior monitored to surface security breaches and robust system logging used for tracking and analysis.

🎯 Designing secure AI systems is foundational to AI security. Organizations should be encouraged to discover, classify and govern their AI applications, continuously monitor and protect them at runtime, and secure the AI development supply chain.

As policymakers across the globe approach regulating artificial intelligence (AI), there is an emerging and welcomed discussion around the importance of securing AI systems themselves. Indeed, many of the same governments that are actively developing broad, risk-based, AI regulatory frameworks have concurrently established AI safety institutes to conduct research and facilitate a technical approach to increasing AI system resilience.

Much of the early work of these AI safety institutes has understandably focused on the cybersecurity of the most powerful large language models (LLMs) and generative AI systems, collectively referred to here as GenAI. These models are increasingly being integrated into applications and networks across every sector of the economy. This is why it’s important for policymakers to understand the unique risks facing the GenAI ecosystem and the mitigation strategies needed to bolster GenAI security as they are adopted.

Over the past few years, Palo Alto Networks has been on the front lines, working to understand these threats and developing security approaches and capabilities to mitigate them. A key pillar of this work has been the development of a GenAI cybersecurity framework, comprising five core security aspects. Each outlines the challenges and attack vectors across the different stages of GenAI security. (See figure below.)

Central to our GenAI cybersecurity framework is the need to address the full lifecycle of secure and responsible GenAI development and use. This entails understanding the threats to these systems, developing tactics to detect incidents and compromises, and implementing capabilities to secure the AI lifecycle by design.

Threats to AI Systems

It’s important for enterprises to have visibility into their full AI supply chain (encompassing the software, hardware and data that underpin AI models), as each of these components introduces potential risks. A supply chain attack targeting a third-party code library could potentially impact a wide range of downstream entities. To mitigate these risks, companies should consider adopting a Zero Trust network architecture that enables continuous validation of all elements within the AI system. Regularly updating dependency mappings, monitoring the integrity of AI models, and securing the cloud environments where AI systems are hosted are also key strategies for securing the AI supply chain.
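To make the integrity-checking side of this concrete, here is a minimal Python sketch of pinned-digest verification for AI supply chain artifacts (model weights, third-party packages, data snapshots). The manifest contents and file names are illustrative assumptions, not a real product API.

```python
import hashlib
from pathlib import Path

# Illustrative manifest pinning known-good SHA-256 digests for supply chain
# artifacts. In practice this would be signed and refreshed alongside the
# organization's dependency mapping.
TRUSTED_MANIFEST = {
    "model.bin": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path) -> bool:
    """Return True only if the artifact matches its pinned digest."""
    expected = TRUSTED_MANIFEST.get(path.name)
    return expected is not None and sha256_of(path) == expected
```

A Zero Trust posture would run a check like this continuously, at load time and at rest, rather than once at install.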

Adversarial attacks on GenAI systems can also manipulate input data in a way that results in AI models making incorrect predictions or classifications. For example, a slightly modified image file could cause an AI model to misidentify an object, with potentially serious impacts in use cases like autonomous driving. To protect against these unintended outcomes, robust defenses, such as adversarial training, where models are trained on both clean and adversarially perturbed examples, can be deployed to help improve resilience. Data encryption, secure transmission protocols and continuous monitoring for unusual patterns in AI system behavior are also recommended safeguards.
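As a hedged illustration of the adversarial-training idea, the following NumPy sketch trains a toy logistic-regression classifier on both clean inputs and FGSM-perturbed copies of them. It stands in for the far larger models discussed above; the learning rate, perturbation budget and model are illustrative.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method: nudge input x in the direction that
    increases the logistic loss, bounded elementwise by eps."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted probability
    grad_x = (p - y) * w                    # d(loss)/dx for logistic loss
    return x + eps * np.sign(grad_x)

def train(X, y, eps=0.1, lr=0.5, epochs=200, adversarial=True):
    """Train logistic regression; if `adversarial`, augment each pass
    with FGSM-perturbed copies of the clean inputs."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        if adversarial:
            X_adv = np.array([fgsm_perturb(x, w, b, yi, eps)
                              for x, yi in zip(X, y)])
            Xb = np.vstack([X, X_adv])
            yb = np.concatenate([y, y])
        else:
            Xb, yb = X, y
        p = 1.0 / (1.0 + np.exp(-(Xb @ w + b)))
        w -= lr * (Xb.T @ (p - yb)) / len(yb)
        b -= lr * float(np.mean(p - yb))
    return w, b
```

The model sees its own worst-case nearby inputs during training, which is the core intuition behind adversarial training as a resilience measure.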

Incident Detection and Response

The importance of establishing a robust threat detection and incident response strategy for AI systems cannot be overstated. AI systems need to be designed with recoverability in mind, ensuring that compromised models can be quickly isolated and replaced with trusted backups to minimize disruption.
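One way to sketch the recoverability pattern, assuming model artifacts are files with a pinned trusted digest (all names here are illustrative):

```python
import hashlib
import shutil
from pathlib import Path

def restore_if_tampered(active: Path, backup: Path, trusted_digest: str) -> bool:
    """If the active model no longer matches its trusted digest, quarantine
    it for forensics and restore the verified backup. Returns True when a
    restore occurred."""
    digest = hashlib.sha256(active.read_bytes()).hexdigest()
    if digest == trusted_digest:
        return False
    # Isolate the compromised artifact rather than deleting it, so incident
    # responders can analyze it later.
    active.rename(active.with_suffix(".quarantined"))
    shutil.copy2(backup, active)
    return True
```

The key design choice is that the compromised model is isolated, not destroyed, while a trusted backup takes over to minimize disruption.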

Since AI systems are often more dynamic than legacy IT environments, making them susceptible to unique threats, it’s important to monitor model behavior for signs of compromise or tampering. This monitoring can be assisted with robust AI system logging, which helps track and analyze anomalies that may indicate security breaches.
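The behavioral-monitoring idea above can be sketched as a rolling-baseline check on a per-request model metric (for instance, mean output confidence). The metric choice, window size and threshold are illustrative assumptions:

```python
import logging
from collections import deque
from statistics import mean, stdev

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-monitor")

class BehaviorMonitor:
    """Flag drift in a per-request model metric by comparing each new
    value to a rolling baseline window."""

    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling baseline
        self.threshold = threshold           # z-score cutoff

    def observe(self, value: float) -> bool:
        """Record a value; log and return True when it deviates sharply
        from the rolling baseline (a possible sign of tampering)."""
        anomalous = False
        if len(self.history) >= 30:  # wait for a minimal baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
                log.warning("anomalous model behavior: value=%.3f baseline=%.3f",
                            value, mu)
        self.history.append(value)
        return anomalous
```

Pairing a check like this with durable system logs gives responders both the alert and the trail needed to analyze a suspected breach.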

Secure AI by Design

The concept of securing AI systems by design is foundational to AI security. This approach shifts the focus from retroactive security measures to proactive and intentional architecture that incorporates security into every stage of AI development and deployment. To that end, any framework for securing AI systems should encourage organizations to:

- Discover, classify and govern the AI applications in use across the organization.
- Continuously monitor and protect AI applications at runtime.
- Secure the AI development supply chain.

AI Security Complements Ethical Model Use Imperatives

As AI systems often process large amounts of personal and sensitive data, ensuring privacy becomes a significant concern. Fortunately, techniques such as differential privacy allow AI systems to learn from data without revealing personal information, advancing both privacy protection and data security goals. In a similar vein, by applying noise to datasets, companies can keep individual user data anonymous while still allowing the GenAI model to extract meaningful insights.
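As a textbook-style sketch of the noise-addition idea (the Laplace mechanism, not a production differential-privacy library), one can release a noisy mean whose noise scale is tied to how much any single record can move the answer:

```python
import numpy as np

def private_mean(values, lower, upper, epsilon, rng=None):
    """Differentially private mean via the Laplace mechanism: clip each
    value into [lower, upper] so one record's influence is bounded, then
    add Laplace noise scaled to sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    clipped = np.clip(values, lower, upper)
    true_mean = float(np.mean(clipped))
    # One record can change the clipped mean by at most this amount.
    sensitivity = (upper - lower) / len(values)
    noise = rng.laplace(0.0, sensitivity / epsilon)
    return true_mean + noise
```

With many records the added noise is tiny relative to the aggregate, which is why such techniques can preserve individual anonymity while keeping population-level insights usable.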

Looking Forward

As AI systems, both GenAI and more traditional machine learning or online learning models, continue to evolve, so too will the threats they face. Recognizing this backdrop, any regulatory or policy framework for AI must ensure that security remains a continuous priority throughout the lifecycle of AI systems. This will help foster better collaboration between government officials, AI developers and the cybersecurity communities to stay ahead of emerging threats.

Acknowledgments: Thanks to the outstanding researchers, engineers and technical drafting team that developed the original six-blog series on the Palo Alto Networks GenAI Security Framework, including Royce Lu, Bo Qu, Yu Fu, Yiheng An, Haozhe Zhang, Qi Deng, Brody Kutt, Nicole Nichols, Katie Strand and Aryn Pedowitz. That series is available on Palo Alto Networks LIVEcommunity blog page.

