a16z · February 19
Regulate AI Use, Not AI Development

The article examines a core question in AI regulation, arguing that policy should focus on how AI is used rather than how it is developed. It notes that regulating AI model development would harm startups, while targeting AI's potentially harmful uses is consistent with the historical principles of technology regulation. The article contends that governments should hold those who misuse AI accountable by enforcing existing laws, rather than enacting new ones that restrict AI innovation. This approach protects consumers while fostering healthy growth in the AI sector, preventing monopolization by large companies and ensuring that small firms can still compete.

⚖️ Traditional technology regulation targets how technology is used, not the technology itself: no law dictates how computers must be built, but those who use computers to commit crimes or to harm consumers are held liable.

🛡️ Regulating AI model development would stifle innovation, especially for resource-constrained startups (Little Tech), entrenching large tech companies' competitive advantage in AI and ultimately reducing consumer choice.

🚨 Existing laws already cover most potential misuses of AI, so rather than drafting new regulations, policymakers should strengthen enforcement of current laws and ensure that violators are held accountable.

🏛️ Governments should invest in the technical capacity of law enforcement to better address AI-enabled crime, and improve coordination and information sharing across agencies to regulate AI use more effectively.

Governments have long regulated technology based on how it's used, not how it's made. No single law regulates how computers are built, for instance, but if a person uses a computer to commit a crime, or a company uses a computer to harm a consumer, then the perpetrator is held liable.

Now, as statehouses across the country convene for new legislative sessions, and as a new Congress and new Presidential administration take office, the key question in artificial intelligence policy is not whether AI should be regulated, but whether regulation should focus on AI development or AI use. Policymakers will be more successful in protecting consumers if they follow historic principles of technology regulation. To ensure that the technology can achieve its potential and that Little Tech can compete with larger platforms, policy should focus on how AI is used, not how AI is built.

Regulating AI models will harm startups

Some lawmakers have concentrated their efforts on regulating the science of AI. They have sought to categorize models based on the math used to create them, and then to impose layer upon layer of compliance requirements on any developer who goes down that path.

While larger companies may be able to task dozens of lawyers and engineers with navigating complicated, and sometimes competing, legal frameworks, startups can't. Startups already face daunting hurdles in their efforts to build AI models that compete with larger platforms: training a model requires massive compute resources, high-level talent, infrastructure, and, beyond technical resources, familiarity with the regulatory environment, to name a few. If lawmakers make it even harder for Little Tech to build AI models, they will hand yet another competitive advantage to larger companies.
If only a few large companies are able to develop AI models, consumers will be left with fewer choices about the AI products they use.

Regulating the potentially harmful uses of AI, rather than imposing broad and onerous requirements on the technology's development, is consistent with the history of technology regulation. In the past, laws have regulated at the application layer (the browsers and websites that users interact with directly) rather than the underlying technical protocols at the core of products and innovation. The Scientific and Advanced Technology Act of 1992 facilitated the internet boom, but didn't place burdens on the development of TCP/IP, a protocol used for computer networking. Similarly, the protocols underlying websites (HTTP) and email (SMTP) were not saddled with regulatory obligations. Developers were free to build with these technologies, but if a developer, application, or user violated the law, they would be held accountable, regardless of what technology they used to commit the violation.

This approach parallels other areas of the law: a person is held liable for murder regardless of the tool used to commit the crime. If someone uses a hammer to hurt another person, the law holds them to account, but lawmakers don't create a separate legal regime to dictate how hammers are made.

Focus AI policy on protecting consumers

Regulating model development is also problematic because it does not directly protect consumers. Creating complex compliance regimes based on the math that an engineer uses to build an AI model will make it harder for Little Tech to build new AI models, but it will not change whether a criminal is held liable when they use AI to commit fraud, to violate a person's civil rights, or to share intimate imagery without consent.
Rather than imposing restrictions that slow AI innovation in the hope of benefitting some people some of the time, policymakers should focus on implementing real protections against illegal and harmful conduct. If policymakers want to protect consumers, they should pass laws that protect consumers.

In most cases, existing laws prohibit harmful conduct regardless of how it is undertaken; there are no exceptions in the law for AI. So, to protect people from the potential harms of new technology, policymakers should focus on enforcing existing laws in a manner that holds perpetrators accountable for their conduct, whether or not they use AI to achieve it. Governments have a wide range of state and federal laws at their disposal, covering a wide variety of potential harms, from unfair and deceptive trade practices to antitrust, fraud, and civil rights violations.

While prosecuting harms may not require a change in existing law, it might require allocating resources to build the capacity necessary to ensure that the law can be enforced. Prosecutors may need technical training to help them build cases when people misuse AI to commit a crime, for instance. State and federal governments may need to ensure that different agencies can coordinate and share information so that they understand how AI could be used to violate a particular law. But none of this requires passing new laws that regulate innovation. In fact, new laws that focus only on regulating model development, rather than strengthening consumer protections, fail to put in place the key building blocks that will help strengthen enforcement of existing law.

Any new laws should be tailored to addressing evidence-based risks and to ensuring that their benefits outweigh their costs, including potential costs to competition.
Laws that protect against consumer harm will create a stronger foundation for our AI future than laws that simply burden innovation, making it harder for Little Tech to compete with larger platforms.

