Call for evaluators: Participate in the European AI Office workshop on general-purpose AI models and systemic risks

 

The European AI Office is collecting paper abstracts on the evaluation of general-purpose AI models with systemic risk. The call is open to specific organisations and research groups, covers several key risk topics, and sets out the application procedure and key dates in December.

📌 The European AI Office is collecting paper abstracts related to evaluation

📌 The call is open to registered organisations and university-affiliated research groups

📌 Topics include CBRN risks and other key risk areas

📌 The application procedure and key dates in December are specified

Published on November 27, 2024 2:54 AM GMT

I am sharing this call from the EU AI Office for organizations involved in evaluation. Please take a close look: among the selection criteria, organizations must be based in Europe, or their leader must be European. If these criteria pose challenges for some of you, feel free to reach out to me at tom@prism-eval.ai. We can explore potential ways to collaborate through PRISM Eval. I believe it’s crucial that we support one another on these complex and impactful issues.

 

The AI Office is collecting contributions from experts to feed into the workshop on general-purpose AI models and systemic risks.

The European AI Office is hosting an online workshop on 13 December 2024 (only for specialists), focusing on the evaluation of general-purpose AI models with systemic risk. This is an opportunity for organisations and research groups to showcase their expertise and contribute to shaping the evaluation ecosystem under the EU AI Act.

The event will bring together leading evaluators and the AI Office to exchange insights on state-of-the-art evaluation methodologies for general-purpose AI models. Selected participants will present their approaches, share best practices, and discuss challenges in assessing systemic risks associated with advanced AI technologies.

This initiative aims to foster collaboration and advance the science of general-purpose AI model evaluations, contributing to the development of robust frameworks for ensuring the safety and trustworthiness of these models.

Call for submissions

The AI Office invites evaluators to submit abstracts of previously published papers on the evaluation of general-purpose AI models with systemic risk. Key topics include CBRN risks and the other systemic risk areas described in the Background section below.

Follow the link to take part in the call. Find more information in the application procedure document (PDF).

Eligibility and selection

Eligible applicants must be registered organisations or university-affiliated research groups with demonstrated experience in general-purpose AI model evaluations. Submissions will be evaluated based on technical quality, relevance, and alignment with the AI Office's mission.

Key dates

The online workshop takes place on 13 December 2024; see the application procedure document (PDF) for submission deadlines.

Background

The AI Act establishes rules to ensure that general-purpose AI models are safe and trustworthy, particularly those posing systemic risks such as facilitating biological weapons development, loss of control, or large-scale harms like discrimination or disinformation. Providers of these models must assess and mitigate risks, conduct adversarial testing, report incidents, and ensure the cybersecurity of the model.

The European AI Office enforces these requirements, conducting evaluations, investigating systemic risks, and imposing fines when necessary. It can also appoint independent experts to carry out evaluations on its behalf.

As the science of systemic risk evaluation is still developing, the AI Office is fostering collaboration with evaluators to advance methodologies and establish best practices. Workshops, like the upcoming December 2024 event, support this effort, building a foundation for safe and responsible AI oversight.


