Astral Codex Ten Podcast feed, July 17, 2024
Why Not Slow AI Progress?

The article examines an important problem in the AI field: the conflation of AI capabilities research with AI safety research. The author argues that, just as oil companies and environmental activists should not be lumped together into a single "fossil fuel community," the AI field needs to clearly distinguish the two in order to guard against the existential risk posed by unaligned AI. The boundary between today's AI safety teams and the capabilities companies that host them is blurred, and the author calls for a more wary attitude toward AI companies.

🤖 AI capabilities research aims to build more powerful AI, while AI safety research focuses on keeping AI from becoming dangerous. In practice, the line between the two is often blurred.

🌍 In the AI field, companies such as DeepMind and OpenAI pursue both capabilities research and safety work, an arrangement that carries potential risks.

🚫 Some argue that we should take a cue from environmental activists, adopt a more critical stance toward AI capabilities research, and even push for government regulation.

📉 The article notes that although there are well-known examples of regulatory failure, some regulations, such as the EU's ban on GMO crops and US restrictions on nuclear power, have effectively halted their respective industries.

🛑 Finally, the author stresses that the development of superintelligent AI should be delayed until we have reliable ways to ensure its safety.

Machine Alignment Monday 8/8/22

https://astralcodexten.substack.com/p/why-not-slow-ai-progress

The Broader Fossil Fuel Community

Imagine if oil companies and environmental activists were both considered part of the broader “fossil fuel community”. Exxon and Shell would be “fossil fuel capabilities”; Greenpeace and the Sierra Club would be “fossil fuel safety” - two equally beloved parts of the rich diverse tapestry of fossil fuel-related work. They would all go to the same parties - fossil fuel community parties - and maybe Greta Thunberg would get bored of protesting climate change and become a coal baron.

This is how AI safety works now. AI capabilities - the work of researching bigger and better AI - is poorly differentiated from AI safety - the work of preventing AI from becoming dangerous. Two of the biggest AI safety teams are at DeepMind and OpenAI, ie the two biggest AI capabilities companies. Some labs straddle the line between capabilities and safety research.

Probably the people at DeepMind and OpenAI think this makes sense. Building AIs and aligning AIs could be complementary goals, like building airplanes and preventing the airplanes from crashing. It sounds superficially plausible.

But a lot of people in AI safety believe that unaligned AI could end the world, that we don’t know how to align AI yet, and that our best chance is to delay superintelligent AI until we do know. Actively working on advancing AI seems like the opposite of that plan.

So maybe (the argument goes) we should take a cue from the environmental activists, and be hostile towards AI companies. Nothing violent or illegal - doing violent illegal things is the best way to lose 100% of your support immediately. But maybe glare a little at your friend who goes into AI capabilities research, instead of getting excited about how cool their new project is. Or agitate for government regulation of AI - either because you trust the government to regulate wisely, or because you at least expect them to come up with burdensome rules that hamstring the industry. While there are salient examples of government regulatory failure, some regulations - like the EU's ban on GMOs or US restrictions on nuclear power - have effectively stopped their respective industries.

