AI & Liability Ideathon

Published on November 26, 2024 1:54 PM GMT

Overview

Join us for the AI & Liability Ideathon, a two-week event starting December 7, 2024, at 3:00 PM BST.
Lawyers, researchers, and developers will come together to create solutions for AI liability. Propose, develop, and refine ideas with a team, ending in a presentation evening where you can share the final version of your proposal.

All the final proposals will be published on AI-Plans, with the top 3 being selected by peer review after the presentation evening.

The presentation evening is open to everyone, including those who didn't take part in the Ideathon. 

The Ideathon, including the Presentation Evening, the Speakers and the Kick Off Call will be primarily taking place in the AI-Plans Discord: https://discord.gg/X2bsw8FG3f

What is an Ideathon? 

An Ideathon is a brainstorming event designed to let participants combine multidisciplinary knowledge, experience, and creativity to tackle a specific topic. Participation is open to everyone interested in AI safety and liability issues, including students, academics, civil society, non-profit organizations, lawyers, law professors, AI/ML engineers, developers, and product leaders.

For this AI Liability Ideathon, team proposals may be technology-based, policy-based, a combination of both, or otherwise related to the topic.

 

Examples of Potential Ideas:

- Building liability frameworks for different types of AI systems
- Exploring legal personhood for AI systems
- Using crosscoders to identify duties of care
- Treating AI agents as subcontractors
- Mapping the AI development and deployment pipeline


Why Participate?

If you're a lawyer: 

If you're an AI Researcher: 

The central question of who holds what responsibility in the AI development pipeline is of ever-growing importance. This is a chance to dive deep into the specifics of how to split that responsibility fairly and to create solutions that might change the world.

Schedule: 

Now until December 7th: Registration & Team Forming

This is the period to register for the Ideathon and to start forming or joining a team if you have not already done so. Consider whom you might need on your team and try to recruit them. Think about the kinds of ideas you might want to develop during the Ideathon.

Mentors will be available to help you find a team or members if you need it. You can introduce yourself and share your proposal ideas in the Discord.

 

December 7th: Kick-Off Call and Q&A

We'll begin the Ideathon with a brief talk from Kabir, Sean and Khullani, then open up for a Q&A.
Attendance isn't mandatory by any means; it's simply an introduction to the event and a chance to ask questions.

December 11th: Deadline for deciding your idea

By this point, teams should have decided which idea they'll focus on developing. This isn't a hard deadline, but it is a strong recommendation.

December 14th: Deadline for sharing 1st draft

Teams should share the first draft of their idea. It can be a couple of sentences or several pages, but it does need to clearly explain what the idea is and why the team chose it. If it runs to multiple pages, we recommend starting with a summary or abstract.
As the Ideathon goes on, the draft can be updated and its updates shared in the share-your-work channel on the Discord. You can also start considering how you want to present the idea.

December 20th: Deadline for final proposal

Now the final, refined version of the idea your team has worked on should be ready. It should be clearly written up, perhaps with an exploration of an implementation (though that isn't necessary). If you haven't already, start preparing to present the idea the next day.

 

December 21st, 4pm BST: Presentation & Voting

The event will culminate in an evening of teams presenting and sharing their ideas on a call. Organizers will reach out beforehand about scheduling; if this time doesn't work for a team, they're welcome to submit a pre-recorded video.
Everyone will have a chance to vote for their favourite ideas. The evening will be streamed online, with teams having the option of uploading their own video explaining their idea.

Speakers & Collaborators:

Speakers:

 

Gabriel Weil 
Assistant Professor of Law, Touro University


Professor Weil’s research has addressed geoengineering governance, tools for overcoming the global commons problem, and the optimal role for subnational policy in tackling a global problem, among other topics.

Tzu Kit Chan
Operations at ML Alignment & Theory Scholars

Among many other things, Tzu does Operations at MATS, co-founded Caltech AI Alignment, runs Stanford AI Alignment, and serves as an advising board member for Berkeley AI Safety.

Mentors:

Sean Thawe
Co-founder/Software Developer, AI-Plans

Sean does mechanistic interpretability research and software development at AI-Plans. He recently took part in an ideathon with his team at the Deep Learning Indaba, held in October and November. Sean also works on data science and software engineering at Mindbloom AI as a consultant and researcher.

Kabir Kumar
Co-founder/Organizer, AI-Plans

Kabir has run several successful events, such as the Critique-a-Thons and Law-a-Thons and does mechanistic interpretability and evals research at AI-Plans.

If you are interested in supporting as a mentor, speaker, or judge, please register your interest here:

https://forms.gle/iACDJb4CE725k9bk7 

 

Resources

Beckers, A., & Teubner, G. (2022). Three liability regimes for artificial intelligence. Goethe University Frankfurt. Retrieved from https://www.jura.uni-frankfurt.de/131542927/BeckersTeubnerThree_Liability_Regimes_for_Artificial_Intelligence2022.pdf

Madiega, T. (2023). Artificial intelligence liability directive (Briefing No. PE 739.342). European Parliamentary Research Service. https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739342/EPRS_BRI(2023)739342_EN.pdf
