Fortune | October 17, 2024
OpenAI is quietly pitching its products to the U.S. military and national security establishment

OpenAI has recently seen turnover among its executives and top talent, and appears to be moving into the world of defense and military contracting. The company removed policy language that prohibited military use of its products, is partnering with relevant firms, and may be participating in military projects. This has sparked considerable controversy, including concerns about AI being used for military purposes and broader ethical questions.

💥 OpenAI has seen recent personnel changes, such as Dane Stuckey joining as chief information security officer. At the same time, OpenAI appears to be pivoting toward defense and military contracts, removing policy language that banned military use of its products and partnering with government contractors.

🚫 OpenAI's military moves have raised many concerns. Palantir, for example, has long been controversial for its military contracts; OpenAI has ties to Palantir and others, and its GPT-4 model powers related services, prompting debate over the risks and ethics of military AI.

💡 Many consider the use of AI for war and military purposes the most controversial application of all; the former Google CEO has compared the arrival of AI to the advent of nuclear weapons, and the known biases of AI models and their tendency to fabricate information add further risk.

OpenAI has been bleeding executives and top talent, but this week it made a big hire. Well, a couple of them. In Tuesday's newsletter, Jeremy covered the hiring of prominent Microsoft AI researcher Sébastien Bubeck. But today, I want to talk about a different hire this week: Dane Stuckey announced on X that he's joining the company as its newest chief information security officer (CISO) after a decade at Palantir, where he worked on the information security team and was most recently CISO.

For many in the tech world, any mention of Palantir raises red flags. The secretive firm, cofounded by Peter Thiel and steeped in military contracts, has garnered intense scrutiny over the years for its surveillance and predictive policing technologies, for taking up the controversial Project Maven contract that inspired walkouts at Google, and for its long-running contract with U.S. Immigration and Customs Enforcement (ICE) to track undocumented immigrants.

Taken by itself, Stuckey's hiring could be just that: a new hire. But it comes as OpenAI appears to be veering into the world of defense and military contracts.

OpenAI's military moment

In January, OpenAI quietly removed language from its usage policies that prohibited the use of its products for "military and warfare." A week later, it was reported that the company was working on software projects for the Pentagon. More recently, OpenAI partnered with Carahsoft, a government contractor that helps the government buy services from private companies quickly and with little administrative burden, in hopes of securing work with the Department of Defense, according to Forbes.

Meanwhile, Fortune's Kali Hays reported this week that the Department of Defense has 83 active contracts with various companies and entities for generative AI work, with the amounts ranging from $4 million to $60 million. OpenAI was not specifically named among the contractors, but its work may be obscured through partnerships with other firms listed as the primary contractor.

OpenAI's GPT-4 model was at the center of a recent partnership between Microsoft, Palantir, and various U.S. defense and intelligence agencies. The parties joined up in August to make a variety of AI and analytics services available to those agencies in classified environments.

With all the debates around how AI should and should not be used, its use for war and military purposes is easily the most controversial. Many, such as former Google CEO and prominent defense industry figure Eric Schmidt, have compared the arrival of AI to the advent of nuclear weapons. Advocacy groups have warned about the risks, especially considering the known biases in AI models and their tendency to make up information. And many have mused over the morality of autonomous weapons, which could take lives without any human input or direction.

The big picture

These types of pursuits have proven to be a major flash point for tech companies. In 2018, thousands of Google employees protested the company's pursuit of a Pentagon contract known as Project Maven, fearing the technology they created would be used for lethal purposes and arguing they hadn't signed up to work with the military. While OpenAI has maintained it will still prohibit the use of its technologies for weapons, we've already seen that it's a slippery slope. The company is not only allowing, but actively seeking out, military uses it forbade this time last year.
Plus, there are many concerning ways models could be used to support deadly military operations without functioning as weapons themselves.

There's no telling whether the march of exits from OpenAI this year is related in any part to its military ambitions. While some who left cited concerns over safety, most offered only boilerplate about pursuing new opportunities in their public resignations. What's clear, however, is that the OpenAI of 2024 and the foreseeable future is a very different company than the one they joined years ago.

Now, here's more AI news.

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com

AI IN THE NEWS

AWS invests $500 million in nuclear to power AI. Amazon's cloud computing unit is pursuing three nuclear projects in Virginia and Washington state, including an agreement with Dominion Energy, Virginia's utility company, to explore building a smaller, more advanced type of nuclear reactor known as a small modular reactor (SMR). The company joins other tech giants, including Google and Microsoft, that are investing in nuclear to power their energy-intensive generative AI services. Dominion projects power demand will increase by 85% over the next 15 years, CNBC reported.

Mistral unveils AI models designed to run on laptops and phones. The new family of two models, called Les Ministraux, can be used for basic tasks like generating text, or can be linked up with the startup's more powerful models to support more use cases. In a blog post, Mistral positions the models as meeting customer requests for "internet-less chatbots" and "local, privacy-first inference for critical applications."

Head of Open Source Initiative criticizes Meta's co-option of the term "open source." Stefano Maffulli, head of the Open Source Initiative, the organization that coined the term open-source software in the 1990s and is seen as the protector of the term's meaning and intent, told the Financial Times that Meta is confusing the public and "polluting" the concept of open source by labeling its freely available AI models "open source." The licensing terms of these models restrict some use cases, and Meta has not been fully transparent about the training methods or datasets used to create its Llama family of models.

FORTUNE ON AI

Startup that wants to be the eBay for AI data taps Google vets and a top IP lawyer for key roles —By Jeremy Kahn

'Godmother of AI' wants everyone to have a place in the tech transformation —By Jenn Brice

'Why the e.l.f. not?' The beauty brand built an AI model to write social media comments —By Jenn Brice

Amazon gadget boss hints at 'awesome' future Alexa products and unveils a slew of new Kindle devices in his public debut —By Jason Del Rey

AI CALENDAR

Oct. 22-23: TedAI, San Francisco
Oct. 28-30: Voice & AI, Arlington, Va.
Nov. 19-22: Microsoft Ignite, Chicago
Dec. 2-6: AWS re:Invent, Las Vegas
Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia
Dec. 9-10: Fortune Brainstorm AI, San Francisco (register here)

EYE ON AI NUMBERS

0.75

That's the average score, on a scale of 0 to 1, given to AI models developed by Alibaba, Anthropic, OpenAI, Meta, and Mistral in an assessment of their compliance with the EU AI Act, according to data published by Reuters. The tests were performed by Swiss startup LatticeFlow AI and its partners at two research institutes, which examined the models across 12 categories, such as technical robustness and safety. EU officials are supporting use of the LatticeFlow tool as they try to figure out how to monitor compliance.
While 0.75 was the rough average score across the various models and categories, there were plenty of lower scores in specific categories. OpenAI's GPT-3.5 Turbo received a score of 0.46 in the assessment measuring discriminatory output, while Mistral's 8x7B Instruct model received 0.38 in security tests for prompt injection attacks. Anthropic received the highest overall average score—0.89.
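To make the arithmetic behind figures like the 0.75 average concrete, here is a minimal sketch in Python. The model names, category names, and scores are hypothetical placeholders (the article cites only a few individual results), not LatticeFlow's published data:

```python
# Minimal sketch of averaging per-category compliance scores, in the style
# of the LatticeFlow-type assessment described above. All names and numbers
# below are hypothetical placeholders, not the published results.
from statistics import mean

# Score per (model, category), each on a 0-to-1 scale.
scores = {
    "model_a": {"discriminatory_output": 0.46, "prompt_injection": 0.71, "robustness": 0.80},
    "model_b": {"discriminatory_output": 0.62, "prompt_injection": 0.38, "robustness": 0.85},
    "model_c": {"discriminatory_output": 0.90, "prompt_injection": 0.88, "robustness": 0.89},
}

# Per-model average across categories (one "overall average score" per model).
per_model = {model: mean(cats.values()) for model, cats in scores.items()}

# Rough overall average across every model/category pair.
overall = mean(s for cats in scores.values() for s in cats.values())

for model, avg in per_model.items():
    print(f"{model}: {avg:.2f}")
print(f"overall: {overall:.2f}")
```

This flat mean is just one plausible aggregation; a real compliance tool might weight categories differently or report per-category minimums instead.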
