少点错误 (LessWrong), August 28, 2024
Leverage points for a pause


Published on August 28, 2024 9:21 AM GMT

What are ways to prevent the development of dangerous AI?

When I started on this question two years ago, I expected that passing laws to ban dangerous architectures was the way to go. Then I learned of many other approaches from other concerned communities. It was overwhelming.

Here’s a four-level framework I found helpful for maintaining an overview.


Four things need to be available to scale AI:

    1. Data (inputs received from the world)
    2. Work (functioning between domains)
    3. Uses (outputs expressed to the world)
    4. Hardware (computation of inputs into outputs)

At each level, AI gets scaled from extracted resources:

    1. A machine programs searched-for data into code to predict more data.
    2. Workers design this machine to cheaply automate out more workers.
    3. Corporations sink profit into working machines for more profitable uses.
    4. Markets produce infrastructure for the production of more machines.

At each level, AI scaling is increasingly harming people:

    1. Disconnected person: bots feed on our online data to spread fake posts between persons.
    2. Dehumanised workplace: bots act as coworkers until robots sloppily automate our workplace.
    3. Destabilised society: robot products are hyped up and misused everywhere over society.
    4. Destroyed environment: robots build more machines that slurp energy and pollute nature.

Communities are stepping up now to stop harmful AI. You can support their actions. For example, you can fund lawsuits by creatives and privacy advocates to protect their data rights. Or give media support to unions negotiating contracts so workers aren't forced to use AI. Or advocate for auditors to have the power to block unsafe AI products.


Over the long term, our communities can work towards comprehensive restrictions:

    1. Digital surveillance ban: no machine takes input data from us, or from any spaces we are in, without our free express consent.
    2. Multi-job robots ban: no machine learns more than one job function, and only then with workers' free express consent.
    3. Autonomous use ban: no machine outputs to where we live, if not tested and steered by local humans in the loop.
    4. Excess hardware ban: no machine can process more than just the data humans curate for scoped uses.

I noticed there are ways to prevent harms and risks at the same time. Communities with diverse worldviews can act in parallel to restrict how much data, work, uses, and hardware are available for scaling AI. While hard, it is possible to pause AI indefinitely.


