Forethought: a new AI macrostrategy group

 

Forethought is an AI macrostrategy research group co-founded by Max Dalton, Will MacAskill, and others, dedicated to exploring how to navigate the potential challenges of the transition to superintelligent AI systems. The organization focuses on the rapid technological progress that AGI may bring and the societal issues that would follow. Forethought's research directions include preparing for an intelligence explosion and achieving a near-best future, with an emphasis on AI's long-term impact on human society. Through research, discussion, and published output, it aims to offer valuable thinking and strategy for what lies ahead.

💥 Forethought is a new organization focused on AI macrostrategy research, co-founded by Max Dalton and others, aiming to explore and address the challenges posed by the rapid development of superintelligent AI systems.

🚀 The organization's research agenda centers on two core directions: first, preparing for AI-driven explosive growth, addressing challenges such as the management of weapons of mass destruction, the rights of digital beings, the governance of automated militaries, and avoiding autocracy or despotism; second, working toward a near-best future, exploring what an ideal "viatopia" might look like and tackling questions such as the allocation of space resources and AI epistemics and persuasion.

💡 Forethought emphasizes what makes its research distinctive: a focus on explosive growth and short timelines, work on questions outside the current Overton window, and attention to issues beyond AI alignment. The group shares its research with professionals thinking about AI in think tanks, companies, and government, keeps an open mind, and encourages researchers to pursue the directions they consider most important and valuable.

Published on March 11, 2025 3:39 PM GMT

Forethought[1] is a new AI macrostrategy research group cofounded by Max Dalton, Will MacAskill, Tom Davidson, and Amrit Sidhu-Brar.

We are trying to figure out how to navigate the (potentially rapid) transition to a world with superintelligent AI systems. We aim to tackle the most important questions we can find, unrestricted by the current Overton window.

More details on our website.

Why we exist

We think that AGI might come soon (say, modal timelines to mostly-automated AI R&D in the next 2-8 years), and might significantly accelerate technological progress, leading to many different challenges. We don’t yet have a good understanding of what this change might look like or how to navigate it. Society is not prepared.

Moreover, we want the world to not just avoid catastrophe: we want to reach a really great future. We think about what this might be like (incorporating moral uncertainty), and what we can do, now, to build towards a good future.

Like all projects, this started out with a plethora of Google docs. We ran a series of seminars to explore the ideas further, and that cascaded into an organization.

Research

Research agendas

We are currently pursuing the following perspectives:

- Preparing for AI-driven explosive growth, including challenges such as the management of weapons of mass destruction, the rights of digital beings, the governance of automated militaries, and avoiding autocracy.
- Achieving a near-best future, including exploring what an ideal "viatopia" might look like, the allocation of space resources, and AI epistemics and persuasion.

We tend to think that many non-alignment areas of work are particularly neglected.

However, we are not confident that these are the best frames for this work, and we are keen to work with people who are pursuing their own agendas.

Recent work

Today we’re also launching “Preparing for the Intelligence Explosion”, which makes a more in-depth case for some of the perspectives above.

You can see some of our other recent work on the site. We have a backlog of research, so we’ll be publishing something new every few days for the next few weeks.

Approach

Comparison to other efforts

We draw inspiration from the Future of Humanity Institute and from OpenPhil’s Worldview Investigations team: like them we aim to focus on big picture important questions, have high intellectual standards, and build a strong core team.

Generally, we’re more focused than many existing organizations on:

- Explosive growth and short AI timelines.
- Questions outside the current Overton window.
- Issues beyond AI alignment.

Principles

- Stay small: We aim to hire from among the handful of people who have the best records of tackling hard questions about AI futures, to offer a supportive institutional home to such people, and to grow slowly.
- Communicate to the nerds: We will mostly share research and ideas with wonky folks thinking about AI in think tanks, companies, and government, rather than working directly with policymakers. We plan to be thoughtful about how best to communicate and publish, but likely on our website and in arXiv papers.
- Be open to “weird” ideas: The most important ideas in history often seemed strange or even blasphemous at the time. And rapid AI-driven technological progress would mean that many issues that seem sci-fi are really quite pressing. We want to be open to ideas based on their plausibility and importance, not on whether they are within the current Overton window.
- Offer intellectual autonomy: Though we try to focus on what's most important, there are many different reasonable views on what that is. Senior researchers in particular are encouraged to follow their instinct on what research avenues are most important and fruitful, and to publish freely. There isn't a "party line" on what we believe.

What you can do

Engage with our research

We’d love for you to read our research, discuss the ideas, and criticize them! We’d also love to see more people working on these topics.

You can follow along by subscribing to our podcast, RSS feed, or Substack.

Please feel free to contact us if you are interested in collaborating, or would like our feedback on something (though note that we won’t be able to substantively engage with all requests).

Apply to work with us

We are not currently actively hiring (and will likely stay quite small), but we have an expression of interest form on our site, and would be particularly keen to hear from people who have related research ideas that they would like to pursue.

Funding

We have funding through to approximately March 2026 at our current size, from two high-net-worth donors.

We’re looking for $1-2M more, which would help us to diversify funding, make it easier for us to hire more researchers, and extend our runway to 2 years. If you are interested in learning more, please contact us.

  1. ^

     We are a new team and project, starting in mid-2024. However, we’ve built ourselves out of the old Forethought Foundation for Global Priorities Research to help get the operations started, and Will was involved with both projects. We considered like 500 names and couldn’t find something that we liked better than “Forethought”. Sorry for the confusion!



