MarkTechPost@AI · July 30, 2024
NIST Releases a Machine Learning Tool for Testing AI Model Risks

NIST has developed Dioptra to ensure the trustworthiness and security of AI models, addressing the limitations of existing evaluation methods.

🎯 Dioptra is a comprehensive software platform for evaluating the trustworthy characteristics of AI. It supports the Measure function of the NIST AI Risk Management Framework, providing tools to evaluate, analyze, and track AI risks and to promote the creation of valid, trustworthy, safe, secure, and open AI systems.

🛠️ Dioptra is built on a microservices architecture and can be deployed at scales ranging from a local laptop to distributed systems with large compute resources. Its core component is the testbed API, which manages user requests and interactions; the platform uses a Redis queue and Docker containers to handle experiment jobs, ensuring modularity and scalability.

💻 Dioptra's plugin system allows existing Python packages to be integrated and new functionality to be developed, making the platform extensible. Its modular design supports combining different datasets, models, attacks, and defenses for comprehensive evaluations. A novel feature is reproducibility and traceability, achieved by creating snapshots of resources and tracking the full history of experiments and their inputs.

🌐 Dioptra's interactive web interface and multi-tenant deployment capability further improve usability, allowing users to share and reuse components.

The rapid advancement and widespread adoption of AI systems have brought about numerous benefits but also significant risks. AI systems can be susceptible to attacks, leading to harmful consequences. Building reliable AI models is difficult due to their often opaque inner workings and vulnerability to adversarial attacks, such as evasion, poisoning, and oracle attacks. These attacks can manipulate data to degrade model performance or extract sensitive information, necessitating robust methods to evaluate and mitigate such threats.
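To make the evasion attack mentioned above concrete, the sketch below applies an FGSM-style perturbation to a toy linear classifier: each input is nudged by a small amount in the direction that most increases the model's loss, flipping predictions the model previously got right. The model, data, and epsilon here are invented for illustration and are not part of Dioptra.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classifier: sigmoid(w.x + b)
w = np.array([2.0, -1.0])
b = 0.1

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps):
    """Perturb each input by eps in the loss-increasing direction.

    For logistic loss, d(loss)/dx = (p - y) * w, so the sign of that
    gradient gives the fastest loss-increasing direction per feature.
    """
    p = predict_proba(x)
    grad = (p - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad)

# Label points with the model's own predictions, so clean accuracy is 1.0
x = rng.normal(size=(100, 2))
y = (predict_proba(x) > 0.5).astype(float)

x_adv = fgsm(x, y, eps=0.5)
clean_acc = np.mean((predict_proba(x) > 0.5) == y)
adv_acc = np.mean((predict_proba(x_adv) > 0.5) == y)
print(clean_acc, adv_acc)
```

Even this tiny, fully transparent model loses accuracy on the perturbed inputs, which is exactly the kind of degradation a testbed like Dioptra is meant to measure systematically.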

Existing methods for evaluating AI security and trustworthiness focus on specific attacks or defenses without considering the broader range of possible threats. They often lack reproducibility, traceability, and compatibility, making it difficult to compare results across studies and applications. To address the challenge of ensuring the trustworthiness and security of AI models, the National Institute of Standards and Technology (NIST) has developed Dioptra.

Dioptra is a comprehensive software platform for evaluating the trustworthy characteristics of AI. It supports the Measure function of the NIST AI Risk Management Framework by providing tools to evaluate, analyze, and track AI risks, encouraging the creation of valid, trustworthy, safe, secure, and open AI systems. It aims to overcome the limitations of existing approaches by offering a standardized platform for evaluating the trustworthiness of AI systems.

Dioptra is built on a microservices architecture that enables deployment at various scales, from a local laptop to distributed systems with high computational resources. The core component is the testbed API, which manages user requests and interactions. The platform uses a Redis queue and Docker containers to handle experiment jobs, ensuring modularity and scalability.

Dioptra's plugin system allows the integration of existing Python packages and the development of new functionality, promoting extensibility. The platform's modular design supports combining different datasets, models, attacks, and defenses, enabling comprehensive evaluations. Its novel features, reproducibility and traceability, are enabled by creating snapshots of resources and tracking the full history of experiments and their inputs. Dioptra's interactive web interface and multi-tenant deployment capabilities further enhance its usability, allowing users to share and reuse components.
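The mix-and-match design described above can be pictured as a small registry in which datasets, models, attacks, and defenses are registered by name and composed into an experiment. The following is a simplified sketch of that idea only; the component names and `run_experiment` helper are invented for illustration and do not reflect Dioptra's actual plugin API.

```python
# Plugin-style registry: components are registered by name and composed
# into experiments, so any dataset can be paired with any attack/defense.
REGISTRY = {"dataset": {}, "attack": {}, "defense": {}}

def register(kind, name):
    def wrap(fn):
        REGISTRY[kind][name] = fn
        return fn
    return wrap

@register("dataset", "toy")
def toy_dataset():
    return [1.0, 2.0, 3.0]

@register("attack", "scale")
def scale_attack(xs):
    # Evasion-style perturbation: blow up every input
    return [x * 10 for x in xs]

@register("defense", "clip")
def clip_defense(xs):
    # Input sanitization: clip values back into the expected range
    return [min(x, 3.0) for x in xs]

def run_experiment(dataset, attack=None, defense=None):
    xs = REGISTRY["dataset"][dataset]()
    if attack:
        xs = REGISTRY["attack"][attack](xs)
    if defense:
        xs = REGISTRY["defense"][defense](xs)
    return xs

result = run_experiment("toy", attack="scale", defense="clip")
print(result)  # clipped back to [3.0, 3.0, 3.0]
```

Because each component is looked up by name rather than hard-wired, swapping in a different attack or defense is a one-argument change, which is what makes systematic, comparable evaluations across many combinations feasible.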

In conclusion, Dioptra addresses the limitations of existing methods by enabling comprehensive assessments under diverse conditions, promoting reproducibility and traceability, and supporting compatibility between different components. By facilitating detailed evaluations of AI defenses against a wide array of attacks, it helps researchers and developers better understand and mitigate the risks associated with AI systems, making Dioptra a valuable tool for ensuring the reliability and security of AI in various applications.

