AWS Machine Learning Blog, July 19, 2024
How Mend.io unlocked hidden patterns in CVE data with Anthropic Claude on Amazon Bedrock

Mend.io used Anthropic Claude on Amazon Bedrock to classify more than 70,000 CVEs and identify those containing specific attack requirement details. By harnessing the power of large language models (LLMs), Mend.io streamlined the analysis process, saved 200 days of manual expert work, and improved the quality of vulnerability verdicts, enabling better vulnerability prioritization for customers and a competitive advantage.

🤔 **Advantages of Anthropic Claude:** Mend.io chose Anthropic Claude because it excelled at analyzing CVE descriptions, particularly in recognizing XML tags, which let Mend.io structure prompts and improve the precision and value of the analysis. In addition, the seamless integration with Amazon Bedrock provided a secure and reliable platform for handling sensitive data.

📝 **Carefully designed prompts:** To ensure Anthropic Claude's analysis was accurate and reliable, Mend.io crafted prompts with rich context, examples, and explicit definitions of attack complexity and attack requirements. XML tags isolated the different sections, guiding Anthropic Claude's attention and improving response accuracy.

🚧 **Challenges and solutions:** While analyzing 70,000 CVEs, Mend.io faced challenges in managing quota limits, estimating costs, and handling unexpected model responses. They managed RPM and TPM quotas by parallelizing model requests and tuning the degree of parallelism, and relied on the built-in retry logic of the Boto3 Python library to handle occasional throttling seamlessly.

📈 **Results and conclusions:** The initiative not only highlights the transformative potential of AI in cybersecurity, but also provides valuable insights into integrating LLMs into real-world applications.

🚀 **Looking ahead:** Mend.io will continue exploring AI applications in cybersecurity, aiming to further improve the efficiency and accuracy of CVE analysis and deliver stronger security solutions to customers.

This post is co-written with Maciej Mensfeld from Mend.io.

In the ever-evolving landscape of cybersecurity, the ability to effectively analyze and categorize Common Vulnerabilities and Exposures (CVEs) is crucial. This post explores how Mend.io, a cybersecurity firm, used Anthropic Claude on Amazon Bedrock to classify and identify CVEs containing specific attack requirement details. By using the power of large language models (LLMs), Mend.io streamlined the analysis of over 70,000 vulnerabilities, automating a process that would have been nearly impossible to accomplish manually. With this capability, they saved 200 days of human experts' work and delivered higher-quality verdicts to their customers, allowing them to prioritize vulnerabilities better and giving Mend.io a competitive advantage. This initiative not only underscores the transformative potential of AI in cybersecurity, but also provides valuable insights into the challenges and best practices for integrating LLMs into real-world applications.

The post delves into the challenges faced, such as managing quota limitations, estimating costs, and handling unexpected model responses. We also provide insights into the model selection process, results analysis, conclusions, recommendations, and Mend.io’s future outlook on integrating artificial intelligence (AI) in cybersecurity.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.

Mend.io is a cybersecurity company dedicated to safeguarding digital ecosystems through innovative solutions. With a deep commitment to using cutting-edge technologies, Mend.io has been at the forefront of integrating AI and machine learning (ML) capabilities into its operations. By continuously pushing the boundaries of what’s possible, Mend.io empowers organizations to stay ahead of evolving cyber threats and maintain a proactive, intelligent approach to security.

Uncovering attack requirements in CVE data

In the cybersecurity domain, the constant influx of CVEs presents a significant challenge. Each year, thousands of new vulnerabilities are reported, with descriptions varying in clarity, completeness, and structure. These reports, often contributed by a diverse global community, can be concise, ambiguous, or lack crucial details, burying critical information such as attack requirements, potential impact, and suggested mitigation steps. The unstructured nature of CVE reports poses a significant obstacle in extracting actionable insights. Automated systems struggle to accurately parse and comprehend the inconsistent and complex narratives, increasing the risk of overlooking or misinterpreting vital details—a scenario with severe implications for security postures.

For cybersecurity professionals, one of the most daunting tasks is identifying the attack requirements (the specific conditions and prerequisites needed for a vulnerability to be successfully exploited) from these vast and highly variable natural language descriptions. Determining whether attack requirements are present or absent is equally crucial, because this information is vital for assessing and mitigating potential risks. With tens of thousands of CVE reports to analyze, manually sifting through each description to extract this nuanced information is impractical, given the sheer volume of data involved.

The decision to use Anthropic Claude on Amazon Bedrock and the advantages it offered

In the face of this daunting challenge, the power of LLMs offered a promising solution. These advanced generative AI models are great at understanding and analyzing vast amounts of text, making them the perfect tool for sifting through the flood of CVE reports to pinpoint those containing attack requirement details.

The decision to use Anthropic Claude on Amazon Bedrock was a strategic one. During evaluations, Mend.io found that although other LLMs like GPT-4 also showed strong performance in analyzing CVE descriptions, Mend.io's specific requirements were better aligned with Anthropic Claude's capabilities. Mend.io used XML tags like <example-attack-requirement> to structure its prompts. When Mend.io evaluated other models with both structured and unstructured prompts, Anthropic Claude's ability to precisely follow the structured prompts and include the expected tags made it a better fit for their use case during testing.

Anthropic Claude's ability to recognize XML tags within prompts gave it a distinct advantage. This capability enabled Mend.io to structure prompts in a way that improved precision and value, ensuring that Anthropic Claude's analysis was tailored to Mend.io's specific needs. Furthermore, the seamless integration with Amazon Bedrock provided a robust and secure platform for handling sensitive data. The proven security infrastructure of AWS gave Mend.io the confidence to process and analyze CVE information without compromising data privacy and security, a critical consideration in the world of cybersecurity.

Crafting the prompt

Crafting the perfect prompt for Anthropic Claude was both an art and a science. It required a deep understanding of the model's capabilities and a thorough process to make sure Anthropic Claude's analysis was precise and grounded in practical applications. Mend.io composed the prompt with rich context, provided examples, and clearly defined the differences between attack complexity and attack requirements as defined in the Common Vulnerability Scoring System (CVSS) v4.0. This level of detail was crucial to make sure Anthropic Claude could accurately identify the nuanced details within CVE descriptions.

The use of XML tags was a game-changer in structuring the prompt. These tags allowed them to isolate different sections, guiding Anthropic Claude’s focus and improving the accuracy of its responses. With this unique capability, Mend.io could direct the model’s attention to specific aspects of the CVE data, streamlining the analysis process and increasing the value of the insights derived.

With a well-crafted prompt and the power of XML tags, Mend.io equipped Anthropic Claude with the context and structure necessary to navigate the intricate world of CVE descriptions, enabling it to pinpoint the critical attack requirement details that would arm security teams with invaluable insights for prioritizing vulnerabilities and fortifying defenses.

The following example illustrates how to craft a prompt effectively using tags with the goal of identifying phishing emails:

<Instructions>
    Analyze emails to identify potential spam or phishing threats. Users should provide the full email content, including headers, by copy-pasting or uploading the email file directly.
</Instructions>
<AnalysisProcess>
    <StepOne>
        <Title>Analyze Sender Information</Title>
        <Description>Verify the sender's email address and domain. Assess additional contacts, date, and time to evaluate potential legitimacy and context.</Description>
    </StepOne>
    <StepTwo>
        <Title>Examine Email Content</Title>
        <Description>Analyze the subject line and body content for relevance and legitimacy. Caution against quick offers. Evaluate personalization and sender legitimacy.</Description>
    </StepTwo>
    <StepThree>
        <Title>Check for Unsolicited Attachments or Links</Title>
        <Description>Identify and scrutinize hyperlinks for potential phishing or spam indicators. Advise on verifying link legitimacy without direct interaction. Use tools like VirusTotal or Google Safe Browsing for safety checks.</Description>
    </StepThree>
</AnalysisProcess>
<Conclusion>
    Based on the analysis, provide an estimation of the email's likelihood of being spam or phishing, expressed as a percentage to indicate the assessed risk level. This comprehensive analysis helps users make informed decisions about the email's authenticity while emphasizing security and privacy.
</Conclusion>
<DataHandling>
    Refer to uploaded documents as 'knowledge source'. Strictly adhere to facts provided, avoiding speculation. Prioritize documented information over baseline knowledge or external sources. If no answer is found within the documents, state this explicitly.
</DataHandling>
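Mend.io doesn't publish its full CVE prompt, but based on the elements described above (the CVSS v4.0 definitions, tagged examples such as <example-attack-requirement>, and the expected YES/NO verdict), a skeleton for the CVE use case might look like the following; every tag name other than <example-attack-requirement> is an illustrative assumption, not the actual prompt:

<definitions>
    The CVSS v4.0 definitions of attack complexity and attack requirements, and how the two differ.
</definitions>
<example-attack-requirement>
    A sample CVE description whose exploitation depends on specific preconditions, annotated with the expected verdict.
</example-attack-requirement>
<cve-description>
    The CVE description to analyze.
</cve-description>
<question>
    Does this CVE description contain attack requirement details? Answer YES or NO only.
</question>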

The challenges

While using Anthropic Claude, Mend.io experienced the flexibility and scalability of the service firsthand. As the analysis workload grew to encompass 70,000 CVEs, they encountered opportunities to optimize their usage of the service’s features and cost management capabilities. When using the on-demand model deployment of Amazon Bedrock across AWS Regions, Mend.io proactively managed the API request per minute (RPM) and tokens per minute (TPM) quotas by parallelizing model requests and adjusting the degree of parallelization to operate within the quota limits. They also took advantage of the built-in retry logic in the Boto3 Python library to handle any occasional throttling scenarios seamlessly. For workloads requiring even higher quotas, the Amazon Bedrock Provisioned Throughput option offers a straightforward solution, though it didn’t align with Mend.io’s specific usage pattern in this case.
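As a rough sketch of that setup (not Mend.io's actual pipeline), the following shows how one might fan out Bedrock requests across a thread pool while letting Boto3's retry configuration absorb occasional throttling. The model ID, worker count, and message format are illustrative assumptions.

# Minimal sketch, assuming the Bedrock Converse API and an example Claude model ID;
# tune max_workers so aggregate traffic stays within your RPM/TPM quotas.
import boto3
from botocore.config import Config
from concurrent.futures import ThreadPoolExecutor

# Adaptive retry mode lets Boto3 back off automatically on throttling errors.
bedrock = boto3.client(
    "bedrock-runtime",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # example model ID, not necessarily the one Mend.io used

def classify_cve(prompt: str) -> str:
    """Send one prompt (shared instructions plus CVE-specific text) and return the raw answer."""
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 256, "temperature": 0},
    )
    return response["output"]["message"]["content"][0]["text"]

# Parallelize requests; the degree of parallelism controls aggregate RPM/TPM usage.
prompts = ["<prompt for CVE A>", "<prompt for CVE B>"]
with ThreadPoolExecutor(max_workers=4) as pool:
    answers = list(pool.map(classify_cve, prompts))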

Although the initial cost estimate for classifying all 70,000 CVEs was lower, the final cost came in higher because more complex input data resulted in longer input and output sequences. This highlighted the importance of comprehensive testing and benchmarking. The flexible pricing models in Amazon Bedrock allow organizations to optimize costs by considering alternative model options or data partitioning strategies, where simpler cases are processed by more cost-effective models while higher-capacity models are reserved for the most challenging instances.
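To make that cost dynamic concrete, here is a back-of-the-envelope estimate; the per-1,000-token prices and average token counts below are placeholders for illustration only, not actual Amazon Bedrock pricing or Mend.io's figures.

# Illustrative cost model only: cost scales with both input and output tokens,
# so sequences that run longer than the benchmark sample raise the total.
num_requests = 70_000
avg_input_tokens = 3_500       # assumed: shared prompt plus CVE-specific text
avg_output_tokens = 150        # assumed: grows when the model adds explanations

price_per_1k_input = 0.003     # placeholder USD rate per 1,000 input tokens
price_per_1k_output = 0.015    # placeholder USD rate per 1,000 output tokens

total_cost = num_requests * (
    avg_input_tokens / 1_000 * price_per_1k_input
    + avg_output_tokens / 1_000 * price_per_1k_output
)
print(f"Estimated cost: ${total_cost:,.2f}")  # rerun with benchmarked averages and current prices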

When working with advanced language models like those available through Amazon Bedrock, it's crucial to craft prompts that align precisely with the desired output format. In Mend.io's case, the expectation was to receive straightforward YES/NO answers, which would streamline subsequent data curation steps. However, the model often provided additional context, justifications, or explanations beyond the anticipated succinct responses. Although these expanded responses offered valuable insights, they introduced unanticipated complexity into Mend.io's data processing workflow. This experience highlighted the importance of prompt refinement to make sure the model's output aligns closely with the specific requirements of the use case. By iterating on prompt formulation and fine-tuning the prompts, organizations can optimize their model's responses to better match the desired response format, ultimately enhancing the efficiency and effectiveness of their data processing pipelines.
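A small post-processing step can absorb that variability. The sketch below is an assumption about how such normalization could be done, not Mend.io's code; it extracts a strict YES/NO verdict and routes anything else to manual review.

import re

def parse_verdict(raw_answer: str) -> str:
    """Return 'YES', 'NO', or 'UNEXPECTED' (for human review)."""
    text = raw_answer.strip().upper()
    # Accept a bare YES/NO, or one that leads into a longer explanation.
    match = re.match(r"\s*(YES|NO)\b", text)
    if match:
        return match.group(1)
    # Fall back to scanning the whole answer when the verdict is buried mid-text.
    found = set(re.findall(r"\b(YES|NO)\b", text))
    if len(found) == 1:
        return found.pop()
    return "UNEXPECTED"

print(parse_verdict("YES. Exploitation requires a valid session token."))  # -> YES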

Results

Despite the challenges Mend.io faced, their diligent efforts paid off. They successfully identified CVEs with attack requirement details, arming security teams with precious insights for prioritizing vulnerabilities and fortifying defenses. This outcome was a significant achievement, because understanding the specific prerequisites for a vulnerability to be exploited is crucial in assessing risk and developing effective mitigation strategies. By using the power of Anthropic Claude, Mend.io was able to sift through tens of thousands of CVE reports, extracting the nuanced information about attack requirements that would have been nearly impossible to obtain through manual analysis. This feat not only saved valuable time and resources but also provided cybersecurity teams with a comprehensive view of the threat landscape, enabling them to make informed decisions and prioritize their efforts effectively.

Mend.io conducted an extensive evaluation of Anthropic Claude, issuing 68,378 requests without considering any quota limitations. Their initial experiment of analyzing a sample of 100 vulnerabilities to understand attack vectors let them determine the accuracy of Claude's direct YES or NO answers. As shown in the following table, Anthropic Claude demonstrated exceptional performance, providing direct YES or NO answers for 99.9883% of the requests. In the few instances where a straightforward answer was not given, Anthropic Claude still provided sufficient information to determine the appropriate response. This evaluation highlights Anthropic Claude's robust capabilities in handling a wide range of queries with high accuracy and reliability.

Character count of the prompt (without CVE-specific details): 13,935
Number of tokens for the prompt (without CVE-specific details): 2,733
Total requests: 68,378
Unexpected answers: 8
Failures (quota limitations excluded): 0
Answer quality success rate: 99.9883%
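The success rate follows directly from the request counts in the table: (68,378 - 8) / 68,378 ≈ 0.999883, or 99.9883% of requests answered in the expected direct YES/NO format.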

Future plans

The successful application of Anthropic Claude in identifying attack requirement details from CVE data is just the beginning of the vast potential that generative AI holds for the cybersecurity domain. As these advanced models continue to evolve and mature, their capabilities will expand, opening up new frontiers in automating vulnerability analysis, threat detection, and incident response.

One promising avenue is the use of generative AI for automating vulnerability categorization and prioritization. By using these models' ability to analyze and comprehend technical descriptions, organizations can streamline the process of identifying and addressing the most critical vulnerabilities, making sure limited resources are allocated effectively. Furthermore, generative AI models can be trained to detect and flag potential malicious code signatures within software repositories or network traffic. This proactive approach can help cybersecurity teams stay ahead of emerging threats, enabling them to respond swiftly and mitigate risks before they can be exploited.

Beyond vulnerability management and threat detection, generative AI also holds promise in incident response and forensic analysis. These models can assist in parsing and making sense of vast amounts of log data, network traffic records, and other security-related information, accelerating the identification of root causes and enabling more effective remediation efforts.

As generative AI continues to advance, its integration with other cutting-edge technologies, such as ML and data analytics, will unlock even more powerful applications in the cybersecurity domain. The ability to process and understand natural language data at scale, combined with the predictive power of ML algorithms, could revolutionize threat intelligence gathering, enabling organizations to anticipate and proactively defend against emerging cyber threats.

Conclusion

The field of cybersecurity is continually advancing, and the integration of generative AI models like Anthropic Claude, powered by the robust infrastructure of Amazon Bedrock, represents a significant step forward in digital defense. Mend.io's successful application of this technology in extracting attack requirement details from CVE data is a testament to the transformative potential of language AI in the vulnerability management and threat analysis domains. By utilizing the power of these advanced models, Mend.io has demonstrated that the complex task of sifting through vast amounts of unstructured data can be tackled with precision and efficiency. This initiative not only empowers security teams with crucial insights for prioritizing vulnerabilities, but also paves the way for future innovations in automating vulnerability analysis, threat detection, and incident response. Anthropic and AWS have played a pivotal role in enabling organizations like Mend.io to take advantage of these cutting-edge technologies.

Looking ahead, the possibilities are truly exciting. As language models continue to evolve and integrate with other emerging technologies, such as ML and data analytics, the potential for revolutionizing threat intelligence gathering and proactive defense becomes increasingly tangible.

If you’re a cybersecurity professional looking to unlock the full potential of language AI in your organization, we encourage you to explore the capabilities of Amazon Bedrock and the Anthropic Claude models. By integrating these cutting-edge technologies into your security operations, you can streamline your vulnerability management processes, enhance threat detection, and bolster your overall cybersecurity posture. Take the first step today and discover how Mend.io’s success can inspire your own journey towards a more secure digital future.


About the Authors

Hemmy Yona is a Solutions Architect at Amazon Web Services based in Israel. With 20 years of experience in software development and group management, Hemmy is passionate about helping customers build innovative, scalable, and cost-effective solutions. Outside of work, you’ll find Hemmy enjoying sports and traveling with family.

Tzahi Mizrahi is a Solutions Architect at Amazon Web Services, specializing in container solutions with over 10 years of experience in development and DevOps lifecycle processes. His expertise includes designing scalable, container-based architectures and optimizing deployment workflows. In his free time, he enjoys music and plays the guitar.

Gili Nachum is a Principal Solutions Architect at AWS, specializing in generative AI and machine learning. Gili helps AWS customers build new foundation models and leverage LLMs to innovate in their business. In his spare time, Gili enjoys family time and calisthenics.

Maciej Mensfeld is a principal product architect at Mend, focusing on data acquisition, aggregation, and AI/LLM security research. He’s the creator of diffend.io (acquired by Mend) and Karafka. As a Software Architect, Security Researcher, and conference speaker, he teaches Ruby, Rails, and Kafka. Passionate about OSS, Maciej actively contributes to various projects, including Karafka, and is a member of the RubyGems security team.
