Seeking AI Alignment Tutor/Advisor: $100–150/hr

 


Published on October 5, 2024 9:28 PM GMT

I am actively looking for a tutor/advisor with expertise in AI x-risk, with the primary goal of collaboratively determining the most effective ways I can contribute to reducing AI existential risk (X-risk).

Tutoring Goals

I suspect that I misunderstand key components of the mental models that lead some highly rational and intelligent individuals to assign a greater than 50% probability of AI-related existential catastrophe ("p-doom"). By gaining a clearer understanding of these models, I aim to refine my thinking and make better-informed decisions about how to meaningfully reduce AI X-risk.

Specifically, I want to delve deeper into why and how misaligned AGI might be developed, and why it wouldn’t be straightforward to solve alignment before it becomes a critical issue.

To clarify, I do NOT believe we could contain or control a misaligned AGI with current safety practices. What I do find likely is that we will be able to avoid such a situation altogether.

In addition to improving my understanding of AI X-risks, I also want to explore strategies that I could help implement to reduce AI X-risk.

About Me

- My primary motivation is effective altruism, and I believe that mitigating AI X-risk is the most important cause to work on.
- I have 7 years of experience working with machine learning, with a focus on large language models (LLMs), and possess strong technical knowledge of the field.
- My current p-doom estimate is 25%: my own model gives about 5%, but I adjust upward since some highly rational thinkers predict significantly higher p-doom (a toy sketch of this kind of adjustment follows below). Even if my p-doom were 1%, I would still view AI X-risk as the most pressing issue and dedicate my time to it.
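The post does not say how this upward adjustment is made, so the following is purely an illustrative sketch rather than the author's actual method. It shows one common way to defer to others' estimates: averaging in log-odds space (a weighted geometric mean of odds), using a made-up peer estimate and equal weights.

```python
import math

def pool_log_odds(probs, weights):
    """Combine probability estimates via a weighted average in log-odds space
    (equivalent to a weighted geometric mean of odds)."""
    total = sum(weights)
    log_odds = sum(w * math.log(p / (1 - p)) for p, w in zip(probs, weights)) / total
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical inputs: my own model says 5%; a thinker I defer to says 60%.
# Equal weights pull the pooled estimate to roughly 22%.
pooled = pool_log_odds([0.05, 0.60], [1.0, 1.0])
print(f"Pooled p-doom: {pooled:.0%}")
```

With these hypothetical inputs the pooled estimate lands in the 20–25% range; shifting the weights toward either estimate moves it accordingly.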
 
Why Become My Tutor?

- You will be directly contributing to AI safety/alignment efforts, working with someone highly committed to making an impact.
- Opportunity for highly technical 1-on-1 discussions about the cutting edge of AI alignment and X-risk reduction strategies.
- Compensation: $100–150 per hour (negotiable depending on your experience).

Ideal Qualifications

- Deep familiarity with AI existential risks and contemporary discussions surrounding AGI misalignment.
- A genuine interest in refining mental models related to AI X-risk and collaborating on solutions.
- A p-doom estimate above 25%, since I aim to understand high p-doom perspectives.
- Strong interpersonal compatibility: It’s crucial that we both find these discussions rewarding and intellectually stimulating.

Structure & Logistics

- Weekly one-hour meetings focused on deep discussions of AI X-risk, strategic interventions, and mental model refinement.
- Flexible arrangement: you can invoice my company for the tutoring services.

How to Apply

If this opportunity sounds appealing to you, or if you know someone who may be a good fit, please DM me here on LessWrong.



