Unite.AI, June 5, 02:07
How to Address the Network Security Challenges Related to Agentic AI

This article examines the network security challenges posed by agentic AI (autonomous AI) and offers strategies for addressing them. Agentic AI can autonomously solve complex problems, greatly improving productivity and operational efficiency. However, these same characteristics introduce security risks, such as data leakage and malicious attacks. The article analyzes the security risks that can arise in each of agentic AI's four basic operational steps (perception and data collection, decision-making, action and execution, and learning and adaptation) and proposes a comprehensive set of solutions, including encrypted connections, cloud firewalls, observability and traceability, and egress security, to help enterprises use agentic AI safely and responsibly.

🧐 By combining large language models, machine learning, and natural language processing, agentic AI can complete tasks autonomously, greatly improving productivity and operational efficiency, for example by automatically completing transactions in banking.

📡 Agentic AI operates through four key steps: perception and data collection, decision-making, action and execution, and learning and adaptation. These steps depend on access to large amounts of data, which also increases network security risks, especially around protecting sensitive information.

🛡️ Agentic AI introduces multiple security challenges, including data leakage, malicious attacks, and potential threats to network infrastructure. Because AI agents need access to large amounts of data, networks become more vulnerable to attack, and data may be exposed to unauthorized users.

💡 To address the security challenges of agentic AI, enterprises should take comprehensive measures, including using encrypted connections to protect data collection, deploying cloud firewalls to ensure AI agents access the correct models, strengthening observability and traceability of agent behavior, and implementing egress security to prevent data exfiltration and malicious control.

Agentic artificial intelligence (AI) represents the next frontier of AI, promising to go beyond even the capabilities of generative AI (GenAI). Unlike most GenAI systems, which rely on human prompts or oversight, agentic AI is proactive: it doesn’t require user input to solve complex, multi-step problems. By leveraging a digital ecosystem of large language models (LLMs), machine learning (ML), and natural language processing (NLP), agentic AI performs tasks autonomously on behalf of a human or system, massively improving productivity and operations.

While agentic AI is still in its early stages, experts have highlighted some ground-breaking use cases. Consider a customer service environment for a bank where an AI agent does more than purely answer a user’s questions when asked. Instead, the agent will actually complete transactions or tasks like moving funds when prompted by the user. Another example could be in a financial setting where agentic AI systems assist human analysts by autonomously and quickly analyzing large amounts of data to generate audit-ready reports for data-informed decision-making.

The incredible possibilities of agentic AI are undeniable. However, like any new technology, there are often security, governance, and compliance concerns. The unique nature of these AI agents presents several security and governance challenges for organizations. Enterprises must address these challenges to not only reap the rewards of agentic AI but also ensure network security and efficiency.

What Network Security Challenges Does Agentic AI Create for Organizations?

AI agents have four basic operations. The first is perception and data collection: hundreds, thousands, or even millions of agents gather data from multiple environments, whether the cloud, on-premises, or the edge, and that data may originate anywhere rather than in one specific geographic location. The second is decision-making: once agents have collected data, they use AI and ML models to make decisions. The third is action and execution: having decided, agents act to carry out that decision. The last is learning and adaptation: agents use the data gathered before and after each decision to tweak their behavior accordingly.
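The four operations above can be sketched as a simple control loop. This is a minimal illustration, not any specific framework's API; the class and method names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Minimal sketch of an agentic AI control loop (illustrative names)."""
    history: list = field(default_factory=list)

    def perceive(self, sources: list) -> list:
        # 1. Perception and data collection: gather inputs from cloud,
        #    on-premises, or edge sources (stubbed here as plain values).
        return [s for s in sources if s is not None]

    def decide(self, observations: list) -> str:
        # 2. Decision-making: a real agent would consult an ML model;
        #    a trivial rule stands in for that here.
        return "act" if observations else "wait"

    def execute(self, decision: str) -> str:
        # 3. Action and execution: carry out the chosen decision.
        return f"executed:{decision}"

    def learn(self, observations: list, outcome: str) -> None:
        # 4. Learning and adaptation: record data from before and after
        #    the decision so future behavior can be tuned.
        self.history.append((observations, outcome))

    def step(self, sources: list) -> str:
        obs = self.perceive(sources)
        outcome = self.execute(self.decide(obs))
        self.learn(obs, outcome)
        return outcome
```

Each pass through `step` touches data at every phase, which is why the security discussion below applies to all four operations, not just data collection.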

In this process, agentic AI requires access to enormous datasets to function effectively. Agents will typically integrate with data systems that handle or store sensitive information, such as financial records, healthcare databases, and other personally identifiable information (PII). Unfortunately, agentic AI complicates efforts to secure network infrastructure against vulnerabilities, particularly with cross-cloud connectivity. It also presents egress security challenges, making it difficult for businesses to guard against exfiltration, as well as command and control breaches. Should an AI agent become compromised, sensitive data could easily be leaked or stolen. Likewise, agents could be hijacked by malicious actors and used to generate and distribute disinformation at scale. When breaches occur, not only are there financial penalties, but also reputational consequences.

Key capabilities like observability and traceability can be undermined by agentic AI, as it is difficult to track which datasets AI agents are accessing, increasing the risk of data being exposed to or accessed by unauthorized users. Similarly, agentic AI’s dynamic learning and adaptation can impede traditional security audits, which rely on structured logs to track data flow. Agentic AI is also ephemeral, dynamic, and continually running, creating a 24/7 need to maintain optimum visibility and security. Scale is another challenge. The attack surface has grown exponentially, extending beyond the on-premises data center and the cloud to include the edge; depending on the organization, agentic AI can add thousands to millions of new endpoints there. These agents operate across numerous locations, whether different clouds, on-premises, or the edge, making the network more vulnerable to attack.
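Since traditional audits rely on structured logs, one way to restore traceability is to emit a structured, trace-ID-tagged record for every dataset an agent touches, so auditors can reconstruct a task's data flow even when the agent itself is ephemeral. The field names below are illustrative assumptions, not a standard schema.

```python
import json
import time
import uuid

def audit_record(agent_id: str, action: str, dataset: str, trace_id: str = "") -> str:
    """Build a structured audit log line for an agent's data access.

    A shared trace_id lets auditors group every access made during one
    agent task, even after the agent has been torn down.
    """
    record = {
        "trace_id": trace_id or str(uuid.uuid4()),
        "agent_id": agent_id,
        "action": action,      # e.g. "read", "write", "decision"
        "dataset": dataset,
        "timestamp": time.time(),
    }
    return json.dumps(record, sort_keys=True)

# Usage: tag every access within one task with the same trace ID.
trace = str(uuid.uuid4())
line = audit_record("agent-042", "read", "payments/pii", trace_id=trace)
```

Emitting JSON lines keeps the records machine-parseable, so existing log pipelines can index them without custom parsers.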

A Comprehensive Approach to Addressing Agentic AI Security Challenges

Organizations can address the security challenges of agentic AI by applying security solutions and best practices at each of the four basic operational steps:

1. Perception and Data Collection: Businesses need high-bandwidth, end-to-end encrypted network connectivity so their agents can collect the enormous amount of data required to function. Recall that this data could be sensitive or highly valuable, depending on the use case. Companies should deploy a high-speed encrypted connectivity solution between all these data sources to protect sensitive and PII data.

2. Decision-Making: Companies must ensure their AI agents have access to the correct models and AI/ML infrastructure to make the right decisions. By implementing a cloud firewall, enterprises can obtain the connectivity and security their AI agents need to access the correct models in an auditable fashion.

3. Action and Execution: AI agents take action based on the decision. However, businesses must identify which agent, out of hundreds or thousands, made that decision. They also need to know how their agents communicate with each other to avoid conflict, or “robots fighting robots.” As such, organizations need observability and traceability of the actions taken by their AI agents. Observability is the ability to track, monitor, and understand the internal states and behavior of AI agents in real time. Traceability is the ability to track and document the data, decisions, and actions of an AI agent.

4. Learning and Adaptation: Companies spend millions, if not hundreds of millions or more, tuning their algorithms, which increases the value and precision of these agents. If a bad actor gets hold of a model and exfiltrates it, all those resources could be in their hands in minutes. Businesses can protect their investments through egress security features that guard against exfiltration and command-and-control breaches.
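The egress controls described in the last step can be approximated by an allowlist check applied before any outbound connection an agent makes. This is a sketch of the idea only; the hostnames and policy structure are hypothetical examples, and a production deployment would enforce this at the network layer rather than in application code.

```python
from urllib.parse import urlparse

# Hypothetical egress policy: agents may only reach these destinations.
EGRESS_ALLOWLIST = {
    "models.internal.example.com",   # approved model endpoint
    "data.internal.example.com",     # approved data store
}

def egress_allowed(url: str) -> bool:
    """Return True only if the destination host is on the allowlist.

    Blocking unknown hosts helps guard against model exfiltration and
    command-and-control callbacks from a compromised agent.
    """
    host = urlparse(url).hostname
    return host in EGRESS_ALLOWLIST

# An approved model call passes; a compromised agent calling home is blocked.
egress_allowed("https://models.internal.example.com/v1/predict")  # True
egress_allowed("https://attacker.example.net/exfil")              # False
```

A default-deny posture like this means a hijacked agent cannot reach an attacker-controlled endpoint even if its decision logic has been subverted.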

Capitalizing on Agentic AI in a Secure and Responsible Manner

Agentic AI holds remarkable potential, empowering companies to reach new heights of productivity and efficiency. But, like any emerging technology in the AI space, organizations must take precautions to safeguard their networks and sensitive data. Security is especially crucial today considering highly sophisticated and well-organized malefactors funded by nation-states, like Salt Typhoon and Silk Typhoon, which continue to conduct large-scale attacks.

Organizations should partner with cloud security experts to develop a robust, scalable, and future-ready security strategy capable of addressing the unique challenges of agentic AI. These partners can enable enterprises to track, manage, and secure their AI agents; moreover, they help provide companies with the awareness they need to satisfy compliance and governance standards.

