Fortune | October 24, 2024
OpenAI suffers departure of yet another AI safety expert, and fresh claims around copyright infringement

 

Several OpenAI researchers have departed, and the company has been accused of problems including copyright infringement. Its culture, methods, and attitude toward AI safety are drawing scrutiny, and its shift from a research organization toward a for-profit business has generated considerable controversy.

OpenAI AI safety researcher Miles Brundage is leaving, saying the company restricted what he could publish and that AI companies in general do not give AI safety the attention it deserves, with several pressures leading to corner-cutting.

OpenAI has seen a string of high-profile departures this year, many tied to the company's shifting stance on AI safety, and its AGI Readiness team is being disbanded, with some staff redeployed to other teams.

Suchir Balaji says OpenAI broke copyright law in training its models and believes chatbots like ChatGPT do society more harm than good; the outcome of the related copyright lawsuits could shape the future economics of generative AI and the prospects of companies such as OpenAI.

OpenAI maintains that its use of publicly available data to build AI models is protected by fair use and related principles, but Balaji's intervention may influence the relevant copyright lawsuits.

OpenAI has lost another long-serving AI safety researcher and been hit by allegations from another former researcher that the company broke copyright law in the training of its models. Both cases raise serious questions about OpenAI’s methods, culture, direction, and future.

On Wednesday, Miles Brundage—who had been leading a team charged with thinking about policies to help both the company and society at large prepare for the advent of “artificial general intelligence,” or AGI—announced he was departing the company on Friday after more than six years so he could continue his work with fewer constraints.

In a lengthy Substack post, Brundage said OpenAI had placed increasingly restrictive limits on what he could say in published research. He also said that, by founding or joining an AI policy non-profit, he hoped to become more effective in warning people of the urgency around AI’s dangers, as “claims to this effect are often dismissed as hype when they come from industry.”

“The attention that safety deserves”

Brundage’s post did not take any overt swipes at his soon-to-be-former employer—indeed, he listed CEO Sam Altman as one of many people who provided “input on earlier versions of this draft”—but it did complain at length about AI companies in general “not necessarily [giving] AI safety and security the attention it deserves by default.”

“There are many reasons for this, one of which is a misalignment between private and societal interests, which regulation can help reduce. There are also difficulties around credible commitments to and verification of safety levels, which further incentivize corner-cutting,” Brundage wrote. “Corner-cutting occurs across a range of areas, including prevention of harmfully biased and hallucinated outputs as well as investment in preventing the catastrophic risks on the horizon.”

Brundage’s departure extends a string of high-profile resignations from OpenAI this year—including Mira Murati, its chief technology officer, and Ilya Sutskever, a co-founder of the company and its former chief scientist—many of which were either explicitly or likely related to the company’s shifting stance on AI safety.

OpenAI was initially founded as a research house for the development of safe AI, but over time the need for hefty outside funding—it recently raised a $6.6 billion round at a $157 billion valuation—has gradually tilted the scales toward its for-profit side, which is likely to soon formally become OpenAI’s dominant structural component.

Co-founders Sutskever and John Schulman both left OpenAI this year to concentrate on safe AI. Sutskever founded his own company, and Schulman joined OpenAI arch-rival Anthropic, as did Jan Leike, a key colleague of Sutskever’s who declared that “over the past years, safety culture and processes [at OpenAI] have taken a backseat to shiny products.”

Already by August, it had become clear that around half of OpenAI’s safety-focused staff had departed in recent months—and that was before the dramatic exit of Murati, who frequently found herself having to adjudicate arguments between the firm’s safety-first researchers and its more gung-ho commercial team, as Fortune reported.
For example, OpenAI’s staffers were given just nine days to test the safety of the firm’s powerful GPT-4o model before its launch, according to sources familiar with the situation.

In a further sign of OpenAI’s shifting safety focus, Brundage said that the AGI Readiness team he led is being disbanded, with its staff being “distributed among other teams.” Its economic research sub-team is becoming the responsibility of new OpenAI chief economist Ronnie Chatterji, he said. He did not specify how the other staff were being redeployed.

It is also worth noting that Brundage is not the first person at OpenAI to face problems over the research they wish to publish. After last year’s dramatic and short-lived ouster of Altman by OpenAI’s safety-focused board, it emerged that Altman had previously laid into then-board-member Helen Toner because she co-authored an AI safety paper that implicitly criticized the company.

Unsustainable model

Concerns about OpenAI’s culture and methods were also heightened by another story on Wednesday. The New York Times carried a major piece on Suchir Balaji, an AI researcher who spent nearly four years at OpenAI before leaving in August.

Balaji says he left because he realized that OpenAI was breaking copyright law in the way it trained its models on copyrighted data from the web, and because he decided that chatbots like ChatGPT were more harmful than beneficial for society.

Again, OpenAI’s transmogrification from research outfit to money-spinner is central here. “With a research project, you can, generally speaking, train on any data. That was the mind-set at the time,” Balaji told the Times. Now he claims that AI models threaten the commercial viability of the businesses that generated that data in the first place, saying: “This is not a sustainable model for the internet ecosystem as a whole.”

OpenAI and many of its peers have been sued by copyright holders over that training, which involved copying seas of data so that the companies’ systems could ingest and learn from it. Those AI models are not thought to contain whole copies of the data as such, and they rarely output close copies in response to users’ prompts—it’s the initial, unauthorized copying that the suits are generally targeting.

The standard defense in such cases is for companies accused of violating copyright to argue that the way they are using copyrighted works should constitute “fair use”—that copyright was not infringed because the companies transformed the copyrighted works into something else, in a non-exploitative way; used them in a way that did not directly compete with the original copyright holders or prevent them from possibly exploiting the work in a similar fashion; or served the public interest. The defense is easier to apply to non-commercial use cases—and is always decided by judges on a case-by-case basis.

In a Wednesday blog post, Balaji dove into the relevant U.S. copyright law and assessed how its tests for establishing “fair use” related to OpenAI’s data practices. He alleged that the advent of ChatGPT had negatively affected traffic to destinations like the developer Q&A site Stack Overflow, saying ChatGPT’s output could in some cases substitute for the information found on that site. He also presented mathematical reasoning that, he claimed, could be used to determine links between an AI model’s output and its training data.
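The article does not reproduce Balaji’s math, but the general idea of quantifying how much a generated answer draws on a particular source can be illustrated with a toy check. The Python sketch below, with hypothetical function names and made-up example strings, simply measures word n-gram overlap between a model’s output and one candidate source passage; it is a crude stand-in for that kind of analysis, not Balaji’s method.

```python
# Illustrative sketch only: a crude word n-gram overlap check between a model's
# output and one candidate source text. This is NOT Balaji's published analysis;
# the function names and example strings below are hypothetical.

def ngrams(text, n=3):
    """Return the set of word n-grams in `text` (lowercased, whitespace-split)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(output, source, n=3):
    """Fraction of the output's n-grams that also appear in the source text."""
    out_grams = ngrams(output, n)
    if not out_grams:
        return 0.0
    return len(out_grams & ngrams(source, n)) / len(out_grams)

if __name__ == "__main__":
    source = ("To reverse a linked list, iterate through the nodes "
              "and flip each next pointer as you go.")
    output = ("You can reverse a linked list by iterating through the nodes "
              "and flipping each next pointer.")
    print(f"3-gram overlap: {overlap_ratio(output, source):.2f}")
```

A real analysis would need far more careful normalization, larger corpora, and statistical controls, and whether any such metric carries legal weight is exactly what the courts will have to decide.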
Balaji is a computer scientist, not a lawyer, and there are plenty of copyright lawyers who do think a fair use defense of using copyrighted works in the training of AI models should succeed. However, Balaji’s intervention will no doubt catch the attention of the lawyers representing the publishers and book authors who have sued OpenAI for copyright infringement. It seems likely that his insider analysis will end up playing some role in these cases, the outcome of which could determine the future economics of generative AI, and possibly the futures of companies such as OpenAI.

It is rare for AI companies’ employees to go public with their concerns over copyright. Until now, the most significant case has probably been that of Ed Newton-Rex, who was head of audio at Stability AI before quitting last November with the claim that “today’s generative AI models can clearly be used to create works that compete with the copyrighted works they are trained on, so I don’t see how using copyrighted works to train generative AI models of this nature can be considered fair use.”

“We build our AI models using publicly available data, in a manner protected by fair use and related principles, and supported by longstanding and widely accepted legal precedents,” an OpenAI spokesperson said in a statement. “We view this principle as fair to creators, necessary for innovators, and critical for U.S. competitiveness.”

“Excited to follow its impact”

Meanwhile, OpenAI’s spokesperson said Brundage’s “plan to go all-in on independent research on AI policy gives him the opportunity to have an impact on a wider scale, and we are excited to learn from his work and follow its impact.”

“We’re confident that in his new role, Miles will continue to raise the bar for the quality of policymaking in industry and government,” they said.

Brundage had seen the scope of his job at OpenAI narrow over his time with the company, going from the development of AI safety testing methodologies and research into current national and international AI governance issues to an exclusive focus on the handling of a potential superhuman AGI, rather than AI’s near-term safety risks.

Meanwhile, OpenAI has hired a growing cast of heavy-hitting policy experts, many with extensive political, national security, or diplomatic experience, to head teams looking at various aspects of AI governance and policy. It hired Anna Makanju, a former Obama administration national security official who had worked in policy roles at SpaceX’s Starlink and Facebook, to oversee its initial outreach to government officials both in Washington, D.C., and around the globe; she is currently OpenAI’s vice president of global impact. More recently, it brought in veteran political operative Chris Lehane, who had also been in a communications and policy role at Airbnb, to be its vice president of global affairs. Chatterji, who is taking over the economics team that formerly reported to Brundage, previously worked in various advisory roles in President Joe Biden’s and President Barack Obama’s White Houses and also served as chief economist at the Department of Commerce.

It is not uncommon at fast-growing technology companies to see early employees have their roles circumscribed by the later addition of senior staff.
In Silicon Valley, this is often referred to as “getting layered.” And, although it is not explicitly mentioned in Brundage’s blog post, it may be that the loss of his economics unit to Chatterji, coming after the previous loss of some of his near-term AI policy research to Makanju and Lehane, was a final straw. Brundage did not immediately respond to requests for comment for this story.

Brundage used his post to set out the issues on which he will now focus. These include: assessing and forecasting AI progress; the regulation of frontier AI safety and security; AI’s economic impacts; the acceleration of positive use cases for AI; policy around the distribution of AI hardware; and the high-level “overall AI grand strategy.”

He warned that “neither OpenAI nor any other frontier lab” was really ready for the advent of AGI, and neither was the outside world. “To be clear, I don’t think this is a controversial statement among OpenAI’s leadership,” he stressed, before arguing that people should still go work at the company as long as they “take seriously the fact that their actions and statements contribute to the culture of the organization, and may create positive or negative path dependencies as the organization begins to steward extremely advanced capabilities.”

Brundage noted that OpenAI had offered him funding, compute credits, and even early model access to aid his upcoming work. However, he said he still hadn’t decided whether to take up those offers, as they “may compromise the reality and/or perception of independence.”
