Funding Case: AI Safety Camp 11

 


Published on December 23, 2024 8:51 AM GMT

Project summary

AI Safety Camp has a seven-year track record of enabling participants to try their fit, find careers and start new orgs in AI Safety. We host up-and-coming researchers outside the Bay Area and London hubs.

If this fundraiser passes…

What are this project's goals? How will you achieve them?

By all accounts they are the gold standard for this type of thing. Everyone says they are great, I am generally a fan of the format, I buy that this can punch way above its weight or cost. If I was going to back [a talent funnel], I’d start here.
Zvi Mowshowitz


AI Safety Camp is part incubator and part talent funnel:

The Incubator case is that AISC seeds epistemically diverse initiatives. Edition 10 supports new alignment directions, control limits research, neglected legal regulations, and 'slow down AI' advocacy. Funders who are uncertain about approaches to alignment – or believe we cannot align AGI in time – may prioritise funding this program.

The Maintaining Talent Funnels case is to give some money just to sustain the program. AISC is no longer the sole program training collaborators new to the field. There are now many programs, and our community’s bottlenecks have shifted to salary funding and org management. Still, new talent will be needed, and for them we can run a cost-efficient program. Sustaining this program retains optionality – institutions are waking up to AI risks and could greatly increase funding and positions. If AISC still exists, it can help funnel people with a security mindset into those positions. But if by then the organisers have left for new jobs, others would have to build AISC up from scratch. Restarting the program would cost more than keeping it running.

As a funder, you may decide that AISC is worth saving as a cost-efficient talent funnel. Or you may decide that AISC uniquely supports unconventional approaches, and that something unexpectedly valuable may come out of them.

Our program is both cost-efficient and scalable.

 

How will this funding be used?

Grant funding is tight. Without private donors, we cannot continue this program.

 

$15k: we won’t run a full program, but can facilitate 10 projects and preserve organising capabilities.

If we raise $15k, we won't run a full official edition. 

We can still commit to facilitating projects. Robert and Remmelt are already supporting projects in their respective fields of work. Robert has collaborated with other independent alignment researchers, and has informally mentored junior researchers doing conceptual and technical research on interpretable AI. Remmelt is kickstarting projects to slow down AI (e.g. formalization work, MILD, Stop AI, inter-community calls, a film by an award-winning director).

We might each simply support projects independently, or we could (also) run an informal event where we invite only past alumni to collaborate on projects together.

We can commit to this if we are freed from needing to transition to new jobs in 2025. Then we can resume full editions when grantmakers make more funds available. With a basic income of $18k each, we can commit to starting, mentoring, and/or coordinating 10 projects. 
 

$40k: we can organise the 11th edition, for 25 projects.

Combined with surplus funds from past camps (conservatively estimated at $21k), this covers salaries for Robert and Remmelt of $30.5k each.

That is enough for us to organise the 11th edition. However, since we would be missing a third organiser, we would only commit to hosting 25 projects.
 

$70k: we can pay a third organiser, for 35 projects.

With funding, we are confident that we can onboard a new organiser to trial with us. They would assist Robert with evaluating technical safety proposals, and help with event ops. This gives us capacity to host 35 projects.
 

$300k: we can cover stipends for 40 projects.

Stipends act as a commitment device, and enable young researchers to focus on research without having to take on side gigs. We only offer stipends to participants who indicate that a stipend would help their work. Our stipends are $1.5k per research lead and $1k per team member, plus admin fees of 9%.

We would pay out stipends in the following order:

The $230k extra safely covers stipends for edition 11. This amount may seem high, but it cost-efficiently supports 150+ people's work over three months. This in turn reduces the load on us organisers, allowing us to host 40 projects.
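As a rough sanity check on that figure, here is a minimal sketch of the stipend arithmetic. The split of 40 research leads and ~110 team members is our assumption (the post only states 40 projects and 150+ participants); the per-person stipends and 9% admin fee are as stated above.

```python
# Minimal stipend-budget sketch for edition 11.
# Assumption: 40 research leads (one per project) and ~110 team members,
# so that total participants land around the stated 150+.
RESEARCH_LEAD_STIPEND = 1_500  # $ per research lead (as stated)
TEAM_MEMBER_STIPEND = 1_000    # $ per team member (as stated)
ADMIN_FEE_RATE = 0.09          # 9% admin fees on top of stipends (as stated)

num_leads, num_members = 40, 110  # assumed split

stipends = num_leads * RESEARCH_LEAD_STIPEND + num_members * TEAM_MEMBER_STIPEND
total = stipends * (1 + ADMIN_FEE_RATE)
print(f"stipends ${stipends:,}, incl. admin fees ${total:,.0f}")
# -> stipends $170,000, incl. admin fees $185,300 (comfortably under the $230k budgeted)
```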
 

Who is on your team? 

Remmelt is coordinator of 'Stop/Pause AI' projects:


Robert is coordinator of 'Conceptual and Technical AI Safety Research' projects:

Linda will take a break from organising, staying on as an advisor. We can hire a third organiser to take up her tasks.
 

What's your track record?

AI Safety Camp is primarily a learning-by-doing training program. People get to try a role and explore directions in AI safety, by collaborating on a concrete project.

Multiple alumni have told us that AI Safety Camp was how they got started in AI Safety.

Papers that came out of the camp include:

Projects started at AI Safety Camp went on to receive a total of $952k in grants:
  AISC 1: Bounded Rationality team
    $30k from Paul
  AISC 3: Modelling Cooperation
    $24k from CLT, $50k from SFF, $83k from SFF, $83k from SFF
  AISC 4: Survey
    $5k from LTFF
  AISC 5: Pessimistic Agents
    $3k from LTFF
  AISC 5: Multi-Objective Alignment
    $20k from EV
  AISC 6: LMs as Tools for Alignment
    $10k from LTFF
  AISC 6: Modularity
    $125k from LTFF
  AISC 7: AGI Inherent Non-Safety
    $170k from SFF, $135k from SFF
  AISC 8: Policy Proposals for High-Risk AI
    $10k from NL, $184k from SFF
  AISC 9: Data Disclosure
    $10k from SFFsg
  AISC 9: VAISU
    $10k from LTFF
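As a quick check, the listed grants do add up to the stated total. A minimal tally sketch (amounts in $k, exactly as listed above):

```python
# Tally of the grants listed above, in $k, as a check against the stated $952k total.
grants_k = {
    "AISC 1: Bounded Rationality": [30],
    "AISC 3: Modelling Cooperation": [24, 50, 83, 83],
    "AISC 4: Survey": [5],
    "AISC 5: Pessimistic Agents": [3],
    "AISC 5: Multi-Objective Alignment": [20],
    "AISC 6: LMs as Tools for Alignment": [10],
    "AISC 6: Modularity": [125],
    "AISC 7: AGI Inherent Non-Safety": [170, 135],
    "AISC 8: Policy Proposals for High-Risk AI": [10, 184],
    "AISC 9: Data Disclosure": [10],
    "AISC 9: VAISU": [10],
}
print(sum(sum(amounts) for amounts in grants_k.values()))  # -> 952
```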

Organizations launched out of camp conversations include:

Alumni went on to take positions at:

For statistics of previous editions, see here.
 

What are the most likely causes and outcomes if this project fails?

Not receiving minimum funding:

Projects are low priority:

Projects support capability work:

How much money have you raised in the last 12 months, and from where?



