TechCrunch News · January 14
OpenAI presents its preferred version of AI regulation in a new ‘blueprint’

OpenAI has published an AI policy document it calls an "economic blueprint," urging the U.S. government to increase investment in the AI industry to secure American leadership in the field. The blueprint emphasizes major investment in chips, data, energy, and talent, and argues that the U.S. faces challenges in AI regulation: state policies are inconsistent and federal action falls short. OpenAI also criticizes shortcomings of existing policy and recommends government action on energy, data transmission, and model security, including developing best practices, streamlining cooperation with national security agencies, and establishing export controls. The blueprint additionally addresses AI and copyright, arguing that AI should be able to learn from publicly available information while still protecting creators' rights. OpenAI hopes the blueprint will shape legislation and has stepped up its lobbying efforts.

💰 OpenAI calls on the U.S. government to increase investment in the AI industry, including chips, data, energy, and talent, to secure American leadership in AI.

🏛️ OpenAI argues the U.S. faces challenges in AI regulation, with inconsistent state policies and insufficient federal action, and needs a unified federal policy along with streamlined engagement with national security agencies.

💡 OpenAI recommends that the government increase spending on energy and data transmission and support new energy sources such as nuclear to meet the power demands of AI data centers, while developing best practices for model deployment.

🔒 On model security, OpenAI proposes export controls that allow AI technology to be shared with allies while limiting exports to adversary nations, and calls on the government to share national security-related information with vendors.

⚖️ OpenAI argues that AI should be able to learn from publicly available information, including copyrighted content, while also protecting creators' rights against unauthorized digital replicas.

OpenAI on Monday published what it’s calling an “economic blueprint” for AI: a living document that lays out policies the company thinks it can build on with the U.S. government and its allies.

The blueprint, which includes a foreword from Chris Lehane, OpenAI’s VP of global affairs, asserts that the U.S. must act to attract billions in funding for the chips, data, energy, and talent necessary to “win on AI.”

“Today, while some countries sideline AI and its economic potential,” Lehane wrote, “the U.S. government can pave the road for its AI industry to continue the country’s global leadership in innovation while protecting national security.”

OpenAI has repeatedly called on the U.S. government to take more substantive action on AI and infrastructure to support the technology’s development. The federal government has largely left AI regulation to the states, a situation OpenAI describes in the blueprint as untenable.

In 2024 alone, state lawmakers introduced almost 700 AI-related bills, some of which conflict with others. Texas’ Responsible AI Governance Act, for example, imposes onerous liability requirements on developers of open source AI models.

OpenAI CEO Sam Altman has also criticized existing federal laws on the books, such as the CHIPS Act, which aimed to revitalize the U.S. semiconductor industry by attracting domestic investment from the world’s top chipmakers. In a recent interview with Bloomberg, Altman said that the CHIPS Act “[has not] been as effective as any of us hoped,” and that he thinks there’s “a real opportunity” for the Trump administration “to do something much better as a follow-on.”

“The thing I really deeply agree with [Trump] on is, it is wild how difficult it has become to build things in the United States,” Altman said in the interview. “Power plants, data centers, any of that kind of stuff. I understand how bureaucratic cruft builds up, but it’s not helpful to the country in general. It’s particularly not helpful when you think about what needs to happen for the U.S. to lead AI. And the U.S. really needs to lead AI.”

To fuel the data centers necessary to develop and run AI, OpenAI’s blueprint recommends “dramatically” increased federal spending on power and data transmission, and meaningful buildout of “new energy sources,” like solar, wind farms, and nuclear. OpenAI — along with its AI rivals — has previously thrown its support behind nuclear power projects, arguing that they’re needed to meet the electricity demands of next-generation server farms.

Tech giants Meta and AWS have run into snags with their nuclear efforts, albeit for reasons that have nothing to do with nuclear power itself.

In the nearer term, OpenAI’s blueprint proposes that the government “develop best practices” for model deployment to protect against misuse, “streamline” the AI industry’s engagement with national security agencies, and develop export controls that enable the sharing of models with allies while “limit[ing]” their export to “adversary nations.” In addition, the blueprint encourages that the government share certain national security-related information, like briefings on threats to the AI industry, with vendors, and help vendors secure resources to evaluate their models for risks.

“The federal government’s approach to frontier model safety and security should streamline requirements,” the blueprint reads. “Responsibly exporting … models to our allies and partners will help them stand up their own AI ecosystems, including their own developer communities innovating with AI and distributing its benefits, while also building AI on U.S. technology, not technology funded by the Chinese Communist Party.”

OpenAI already counts a few U.S. government departments as partners, and — should its blueprint gain currency among policymakers — stands to add more. The company has deals with the Pentagon for cybersecurity work and other, related projects, and it has teamed up with defense startup Anduril to supply its AI tech to systems the U.S. military uses to counter drone attacks.

In its blueprint, OpenAI calls for the drafting of standards “recognized and respected” by other nations and international bodies on behalf of the U.S. private sector. But the company stops short of endorsing mandatory rules or edicts. “[The government can create] a defined, voluntary pathway for companies that develop [AI] to work with government to define model evaluations, test models, and exchange information to support the companies’ safeguards,” the blueprint reads.

The Biden administration took a similar tack with its AI Executive Order, which sought to enact several high-level, voluntary AI safety and security standards. The executive order established the U.S. AI Safety Institute (AISI), a federal government body that studies risks in AI systems, which has partnered with companies including OpenAI to evaluate model safety. But Trump and his allies have pledged to repeal Biden’s executive order, putting its codification — and the AISI — at risk of being undone.

OpenAI’s blueprint also addresses copyright as it relates to AI, a hot-button topic. The company makes the case that AI developers should be able to use “publicly available information,” including copyrighted content, to develop models.

OpenAI, along with many other AI companies, trains models on public data from across the web. The company has licensing agreements in place with a number of platforms and publishers, and offers limited ways for creators to “opt out” of its model development. But OpenAI has also said that it would be “impossible” to train AI models without using copyrighted materials, and a number of creators have sued the company for allegedly training on their works without permission.

“[O]ther actors, including developers in other countries, make no effort to respect or engage with the owners of IP rights,” the blueprint reads. “If the U.S. and like-minded nations don’t address this imbalance through sensible measures that help advance AI for the long-term, the same content will still be used for AI training elsewhere, but for the benefit of other economies. [The government should ensure] that AI has the ability to learn from universal, publicly available information, just like humans do, while also protecting creators from unauthorized digital replicas.”

It remains to be seen which parts of OpenAI’s blueprint, if any, influence legislation. But the proposals are a signal that OpenAI intends to remain a key player in the race for a unifying U.S. AI policy.

In the first half of last year, OpenAI more than tripled its lobbying expenditures, spending $800,000 versus $260,000 in all of 2023. The company has also brought former government leaders into its executive ranks, including ex-Defense Department official Sasha Baker, ex-NSA chief Paul Nakasone, and Aaron Chatterji, formerly the chief economist at the Commerce Department under President Joe Biden.

As it makes hires and expands its global affairs division, OpenAI has been more vocal about which AI laws and rules it prefers, for instance throwing its weight behind Senate bills that would establish a federal rule-making body for AI and provide federal scholarships for AI R&D. The company has also opposed bills, in particular California’s SB 1047, arguing that it would stifle AI innovation and push out talent.
