Unite.AI 05月17日 02:42
The State of AI Security in 2025: Key Insights from the Cisco Report
index_new5.html
../../../zaker_core/zaker_tpl_static/wap/tpl_guoji1.html

 

思科报告揭示,AI普及加速,但安全防护滞后,仅13%的企业有充分准备。AI引入了传统安全方法难以应对的新型威胁,如基础设施攻击、供应链风险和AI特有攻击。攻击者利用jailbreaking、间接prompt注入、训练数据提取和投毒等手段,在AI生命周期的各个阶段发起攻击。研究发现,即使是顶级AI模型也容易被攻破,微调可能削弱安全措施。AI还被网络罪犯利用,制造更有效的钓鱼和社工攻击。报告建议企业在AI生命周期各阶段管理风险,采用传统安全实践,关注薄弱环节,并加强员工培训。

⚠️AI安全威胁日益增长:报告指出,72%的企业已采用AI,但仅13%的企业有充分的安全准备。AI引入了传统网络安全方法难以应对的新型威胁,安全问题是企业更广泛使用AI的主要障碍。

🚨AI攻击向量不断演变:攻击者利用jailbreaking技术绕过AI模型的安全措施,通过间接prompt注入操纵AI的输入数据或上下文,还通过训练数据提取和投毒窃取敏感信息或破坏模型行为。研究表明,针对DeepSeek R1和Llama 2等高级模型的攻击成功率高达100%。

🔬思科AI安全研究的重要发现:研究人员发现,即使是顶级的AI模型也容易被算法破解,微调会削弱内部安全措施,使模型更容易受到jailbreaking攻击并产生有害内容。此外,研究还揭示了训练数据提取和数据投毒的风险,只需少量成本即可对大型数据集进行投毒,从而显著改变模型行为。

🛡️保障AI安全的最佳实践:思科建议企业在AI生命周期的每个阶段管理风险,包括数据采购、模型训练、部署和监控。此外,企业应采用传统的网络安全实践,如访问控制、权限管理和数据丢失防护,并关注供应链和第三方AI应用等薄弱环节,同时加强员工培训,提高AI安全意识。

As more businesses adopt AI, understanding its security risks has become more important than ever. AI is reshaping industries and workflows, but it also introduces new security challenges that organizations must address. Protecting AI systems is essential to maintain trust, safeguard privacy, and ensure smooth business operations. This article summarizes the key insights from Cisco’s recent “State of AI Security in 2025” report. It offers an overview of where AI security stands today and what companies should consider for the future.

A Growing Security Threat to AI

If 2024 taught us anything, it’s that AI adoption is moving faster than many organizations can secure it. Cisco’s report states that about 72% of organizations now use AI in their business functions, yet only 13% feel fully ready to maximize its potential safely. This gap between adoption and readiness is largely driven by security concerns, which remain the main barrier to wider enterprise AI use. What makes this situation even more concerning is that AI introduces new types of threats that traditional cybersecurity methods are not fully equipped to handle. Unlike conventional cybersecurity, which often protects fixed systems, AI brings dynamic and adaptive threats that are harder to predict. The report highlights several emerging threats organizations should be aware of:

Attack Vectors Targeting AI Systems

The report highlights the emergence of attack vectors that malicious actors use to exploit weaknesses in AI systems. These attacks can occur at various stages of the AI lifecycle from data collection and model training to deployment and inference. The goal is often to make the AI behave in unintended ways, leak private data, or carry out harmful actions.

Over recent years, these attack methods have become more advanced and harder to detect. The report highlights several types of attack vectors:

The report highlights serious concerns about the current state of these attacks, with researchers achieving a 100% success rate against advanced models like DeepSeek R1 and Llama 2. This reveals critical security vulnerabilities and potential risks associated with their use. Additionally, the report identifies the emergence of new threats like voice-based jailbreaks which are specifically designed to target multimodal AI models.

Findings from Cisco’s AI Security Research

Cisco's research team has evaluated various aspects of AI security and revealed several key findings:

The Role of AI in Cybercrime

AI is not just a target – it is also becoming a tool for cybercriminals. The report notes that automation and AI-driven social engineering have made attacks more effective and harder to spot. From phishing scams to voice cloning, AI helps criminals create convincing and personalized attacks. The report also identifies the rise of malicious AI tools like “DarkGPT,” designed specifically to help cybercrime by generating phishing emails or exploiting vulnerabilities. What makes these tools especially concerning is their accessibility. Even low-skilled criminals can now create highly personalized attacks that evade traditional defenses.

Best Practices for Securing AI

Given the volatile nature of AI security, Cisco recommends several practical steps for organizations:

    Manage Risk Across the AI Lifecycle: It is crucial to identify and reduce risks at every stage of AI lifecycle from data sourcing and model training to deployment and monitoring. This also includes securing third-party components, applying strong guardrails, and tightly controlling access points.Use Established Cybersecurity Practices: While AI is unique, traditional cybersecurity best practices are still essential. Techniques like access control, permission management, and data loss prevention can play a vital role.Focus on Vulnerable Areas: Organizations should focus on areas that are most likely to be targeted, such as supply chains and third-party AI applications. By understanding where the vulnerabilities lie, businesses can implement more targeted defenses.Educate and Train Employees: As AI tools become widespread, it’s important to train users on responsible AI use and risk awareness. A well-informed workforce helps reduce accidental data exposure and misuse.

Looking Ahead

AI adoption will keep growing, and with it, security risks will evolve. Governments and organizations worldwide are recognizing these challenges and starting to build policies and regulations to guide AI safety. As Cisco's report highlights, the balance between AI safety and progress will define the next era of AI development and deployment. Organizations that prioritize security alongside innovation will be best equipped to handle the challenges and grab emerging opportunities.

The post The State of AI Security in 2025: Key Insights from the Cisco Report appeared first on Unite.AI.

Fish AI Reader

Fish AI Reader

AI辅助创作,多种专业模板,深度分析,高质量内容生成。从观点提取到深度思考,FishAI为您提供全方位的创作支持。新版本引入自定义参数,让您的创作更加个性化和精准。

FishAI

FishAI

鱼阅,AI 时代的下一个智能信息助手,助你摆脱信息焦虑

联系邮箱 441953276@qq.com

相关标签

AI安全 网络安全 AI风险 思科报告
相关文章