TechCrunch News, January 25
AI companies upped their federal lobbying spend in 2024 amid regulatory uncertainty

U.S. companies significantly increased their federal lobbying on AI issues last year. The data show that 648 companies took part in 2024, up from 458 in 2023. Microsoft and others backed related legislation such as the CREATE AI Act, and OpenAI and other firms also weighed in. Several AI labs raised their lobbying spend, while domestic AI policymaking remained unsettled.

📈 648 companies lobbied on AI in 2024, up from 458 in 2023

💪 Microsoft and others backed related legislation, such as the CREATE AI Act

💰 AI labs including OpenAI increased their lobbying spend, with OpenAI going from $260,000 to $1.76 million

📜 Domestic AI policymaking remained complicated, with numerous bills proposed at the state level

Companies spent significantly more lobbying on AI issues at the U.S. federal level last year than in 2023, amid regulatory uncertainty.

According to data compiled by OpenSecrets, 648 companies spent on AI lobbying in 2024 versus 458 in 2023, an increase of roughly 41% year over year.

Companies like Microsoft supported legislation such as the CREATE AI Act, which would support the benchmarking of AI systems developed in the U.S. Others, including OpenAI, put their weight behind the Advancement and Reliability Act, which would set up a dedicated government center for AI research.

Most AI labs — that is, companies dedicated almost exclusively to commercializing various kinds of AI tech — spent more backing legislative agenda items in 2024 than in 2023, the data shows.

OpenAI upped its lobbying expenditures to $1.76 million last year from $260,000 in 2023. Anthropic, OpenAI’s close rival, more than doubled its spend from $280,000 in 2023 to $720,000 last year, and enterprise-focused startup Cohere boosted its spending to $230,000 in 2024 from just $70,000 two years ago.

Both OpenAI and Anthropic made hires over the last year to coordinate their policymaker outreach. Anthropic brought on its first in-house lobbyist, Department of Justice alum Rachel Appleton, and OpenAI hired political veteran Chris Lehane as its new VP of policy.

All told, OpenAI, Anthropic, and Cohere set aside $2.71 million combined for their 2024 federal lobbying initiatives. That’s a tiny figure compared to what the larger tech industry put toward lobbying in the same timeframe ($61.5 million), but more than four times the total that the three AI labs spent in 2023 ($610,000).

TechCrunch reached out to OpenAI, Anthropic, and Cohere for comment but did not hear back as of press time.

Last year was a tumultuous one in domestic AI policymaking. In the first half alone, Congressional lawmakers considered more than 90 AI-related pieces of legislation, according to the Brennan Center. At the state level, over 700 laws were proposed.

Congress made little headway, prompting state lawmakers to forge ahead. Tennessee became the first state to protect voice artists from unauthorized AI cloning. Colorado adopted a tiered, risk-based approach to AI policy. And California Governor Gavin Newsom signed dozens of AI-related safety bills, a few of which require AI companies to disclose details about their training.

No state officials were successful in enacting AI regulation as comprehensive as international frameworks like the EU’s AI Act, however.

After a protracted battle with special interests, Governor Newsom vetoed bill SB 1047, which would have imposed wide-ranging safety and transparency requirements on AI developers. Texas’ TRAIGA bill, which is even broader in scope, may suffer the same fate once it makes its way through the statehouse.

It’s unclear whether the federal government can make more progress on AI legislation this year versus last, or even whether there’s a strong appetite for codification. President Donald Trump has signaled his intention to largely deregulate the industry, clearing what he perceives to be roadblocks to U.S. dominance in AI.

During his first day in office, Trump revoked an executive order by former President Joe Biden that sought to reduce risks AI might pose to consumers, workers, and national security. On Thursday, Trump signed an EO instructing federal agencies to suspend certain Biden-era AI policies and programs, potentially including export rules on AI models.

In November, Anthropic called for “targeted” federal AI regulation within the next 18 months, warning that the window for “proactive risk prevention is closing fast.” For its part, OpenAI in a recent policy doc called on the U.S. government to take more substantive action on AI and infrastructure to support the technology’s development.
