LessWrong, December 23, 2024
What is compute governance?

Published on December 23, 2024 6:32 AM GMT

This is an article in the featured articles series from AISafety.info. AISafety.info writes AI safety intro content. We'd appreciate any feedback.

The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety.

Compute governance is a type of AI governance that focuses on controlling access to the computing hardware needed to develop and run AI. It has been argued that regulating compute is particularly promising compared to regulating other inputs to AI progress, such as data, algorithms, or human talent.

Although compute governance is one of the more frequently proposed strategies for AI governance, as of November 2024, there are few policies in place for governing compute, and much of the research on the topic is exploratory. Currently enforced measures related to compute governance include US export controls on advanced microchips to China and reporting requirements for large training runs in the US and EU.
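Reporting requirements like these are typically defined in terms of total training compute. A widely used back-of-the-envelope heuristic puts training compute at roughly 6 FLOPs per parameter per training token. The sketch below applies that heuristic to a hypothetical model; the model size and token count are illustrative assumptions, and the 10^26 FLOP figure is the reporting threshold from the 2023 US Executive Order on AI.

```python
# Rough training-compute estimate using the common heuristic
# C ≈ 6 * N * D (FLOPs), where N = parameter count, D = training tokens.
# The model figures below are hypothetical, chosen for illustration.

US_REPORTING_THRESHOLD_FLOP = 1e26  # reporting threshold in the 2023 US Executive Order

def training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs via C ≈ 6·N·D."""
    return 6 * params * tokens

# A hypothetical 70B-parameter model trained on 15T tokens:
flop = training_flop(70e9, 15e12)
print(f"{flop:.2e} FLOP")                   # prints 6.30e+24 FLOP
print(flop >= US_REPORTING_THRESHOLD_FLOP)  # prints False: below the threshold
```

Estimates of this kind are one reason compute is considered a tractable regulatory target: unlike data quality or algorithmic progress, total training compute can be approximated from a small number of externally observable quantities.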

According to Sastry et al., compute governance can be used toward three main ends:

    Visibility is the ability of policymakers to know what’s going on in AI, so they can make informed decisions. The amount of compute used for a training run can be used as information about the capabilities and risk of the resulting system. Measures to improve visibility could include:
      Using public information to estimate compute used.
      Requiring AI developers and cloud providers to report large training runs.
      Creating an international registry for AI chips.
      Designing systems to monitor general workload done by AI chips while preserving privacy about sensitive information.
    Allocation refers to policymakers influencing the amount of compute available to different projects. Strategies in this category include:
      Making compute available for research toward technologies that increase safety and defensive capabilities, or that substitute for more dangerous alternatives.
      Speeding up or slowing down the general rate of AI progress.
      Restricting or expanding the range of countries or groups with access to certain systems.
      Creating an international megaproject aimed at developing AI technologies — such proposals are sometimes called “CERN for AI”.
    Enforcement is about policymakers ensuring that the relevant actors abide by their rules. This could potentially be enabled by the right kind of software or hardware; hardware-based enforcement is likely to be harder to circumvent. Strategies here include:
      Restricting networking capabilities to make chips harder to use in very large clusters.
      Modifying chips to add cryptographic mechanisms to automatically verify or enforce restrictions on what types of tasks these chips are allowed to be used for.
      Designing chips so that they can be controlled multilaterally, similar to “permissive action links” for nuclear weapons.
      Restricting access to compute through, for instance, cloud compute providers.
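To picture the cryptographic-verification idea in miniature: a chip (or its firmware) could refuse to run a workload unless it receives a valid authorization tag issued under a regulator-controlled key. The sketch below is purely conceptual, using an HMAC from Python's standard library with a made-up shared key and workload format; real proposals involve hardware roots of trust and public-key attestation rather than application-level checks like this.

```python
import hmac
import hashlib

# Illustrative only: a regulator-held secret signs workload descriptors,
# and the "chip" verifies the tag before running. The key and workload
# format here are hypothetical placeholders.
REGULATOR_KEY = b"example-shared-secret"

def authorize(workload: bytes) -> bytes:
    """Regulator side: issue a tag permitting this workload."""
    return hmac.new(REGULATOR_KEY, workload, hashlib.sha256).digest()

def chip_allows(workload: bytes, tag: bytes) -> bool:
    """Chip side: agree to run the workload only if the tag verifies."""
    expected = hmac.new(REGULATOR_KEY, workload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

job = b"train:params=7e9,tokens=2e12"
tag = authorize(job)
print(chip_allows(job, tag))                   # prints True
print(chip_allows(b"train:params=1e12", tag))  # prints False: tag doesn't match
```

The point of hardware-based variants of this scheme is that the verification step would live below the level a user can modify, which is why they are expected to be harder to circumvent than purely software-based controls.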

Many of these mechanisms are speculative and would require further research before they could be implemented. They could end up being risky or ineffective. However, many safety researchers think compute governance would help avert major existential risks to humanity.

