Fortune | FORTUNE October 23, 2024
What a global ranking of banks on AI prowess says about the tech’s ‘flywheel’ effect

This article analyzes trends in the application of artificial intelligence in finance, using Evident Insights' AI ranking of banks as an example of the enormous advantage conferred by AI's "flywheel effect." Banks that invested early in AI, such as JPMorgan Chase, have built a significant lead through sustained investment and deployment and are beginning to see returns, while other banks must work much harder to catch up and may never close the gap. The article also examines the challenges of AI governance and argues that international cooperation is essential for AI's healthy development.

😄 **JPMorgan Chase's lead**: JPMorgan Chase began investing in AI as early as 2018 and established an advanced AI research lab. It also built a centralized data and AI platform to accelerate the training and deployment of AI models. The bank was an early adopter of generative AI and has rolled out a bespoke generative AI tool to its 140,000 employees. It also requires all employees to complete an AI course to sharpen their ability to apply the technology. JPMorgan Chase projects $2 billion in "business value" from its AI deployments this year.

😎 **AI's "flywheel effect"**: Once AI projects begin to pay off, they create a self-perpetuating, accelerating cycle of benefits, which makes it hard for companies that took a wait-and-see approach early on to catch up. A few banks, such as HSBC, Canada's TD Bank, and Morgan Stanley, broke into the top 10 by tightening their AI efforts, making key hires, and partnering with OpenAI and Nvidia.

🤔 **The challenges of AI governance**: Researchers from the University of Cambridge, University of Oxford, Monash University, and the University of Wichita ran a simulation game called "Intelligence Rising" to explore how different actors, including tech companies and governments, would respond to their competing incentives as AI grows increasingly advanced. The results showed that commercial competition between companies and geopolitical competition between countries make international cooperation extremely difficult. In the simulations, espionage and cyberwarfare were widely used to steal technology, government policy struggled to keep pace with rapid AI progress, and in many games government players resorted to military force to try to prevent rival nations from gaining a decisive advantage.

🚀 **Other AI news**: Microsoft rolls out its AI agents; the Google DeepMind CEO and Nobel laureate now also runs Gemini; Dow Jones and the New York Post sue Perplexity; the Microsoft-OpenAI relationship shows signs of fraying; Elon Musk's xAI launches an API; Penguin Random House updates its copyright language to forbid AI training; and an internet watchdog says AI-generated child sexual abuse imagery is approaching a "tipping point."

**Running harder to stand still**

JPMorgan Chase tops Evident's ranking, as it did last year and the year before. In fact, all of the top four—which is rounded out by Capital One, Royal Bank of Canada, and Wells Fargo—have maintained their positions from last year's ranking. But this fact belies what is actually going on, Alexandra Mousavizadeh, CEO of Evident Insights, tells me. Most of the banks in the index improved their overall scores, and the average scores have climbed significantly over time. NatWest, the 18th bank in the index, for example, scored more points this year than the number 10 bank in last year's ranking.

"The leaders are leading more, but the pace of growth has doubled since last year," Mousavizadeh says. Banks that made the investment to get their data cleaned up and ready for AI applications, and that have invested in hiring AI talent and deploying AI solutions, are moving much faster than those further behind on these tasks.

**AI's "flywheel"**

This is evidence of AI's "flywheel" effects. And it's why companies that have taken a wait-and-see approach to the AI boom, wary of reports about how difficult it is to generate return on investment from AI projects, may be making a big mistake. It can take time and significant investment for AI projects to begin to pay off, but once they do, these AI deployments can create a self-perpetuating and accelerating cycle of benefits. That flywheel effect means it can be impossible for late movers to ever close the gap.

Or nearly impossible. A few banks have managed to jump up in the rankings this year—HSBC, Canada's TD Bank, and Morgan Stanley all broke into the top 10 for the first time. In the case of HSBC, tying its AI efforts more tightly together and making some key hires helped, Mousavizadeh says. For Morgan Stanley, partnerships with OpenAI and Nvidia helped boost its position.

But JPMorgan Chase remains well ahead of its peers largely because it started investing in AI much earlier than others. It hired Manuela Veloso, a top machine learning researcher from Carnegie Mellon University, back in 2018 and stood up its own advanced AI research lab. In 2019, its then-chief data and analytics officer championed a centralized data and AI platform to move information into its own AI models much faster than before. It was an early adopter of generative AI models, too, and is now pushing a bespoke generative AI tool out to 140,000 employees. It is also making all its employees complete an AI course designed to equip them to use the technology effectively. Critically, it says it's starting to see value from this investment—and unlike most companies, it is putting hard numbers behind that claim. The company currently projects it will see $2 billion of "business value" from AI deployments this year.

**Putting numbers behind ROI**

While "business value" may still seem a bit wishy-washy—it's not exactly as concrete a term as ROI, after all—putting actual dollar figures out there matters, Mousavizadeh says. That's because once a bank publishes numbers, financial analysts, investors, and regulators will push for further transparency into those numbers and hold the bank accountable for meeting them. That, in turn, should increase the pressure on other global banks to do the same. (One other bank, DBS, has said it saw $370 million in "economic value" from a combination of additional revenue, cost savings, and risk avoidance, thanks to AI.)

While Evident Insights currently ranks only financial institutions, these patterns—today's winners continuing to win, and increasingly publishing real numbers—will likely be repeated in other industries, too.
Those waiting on the sidelines for AI to mature or prove itself may find that by the time the evidence of ROI is clear, it is already too late to act.

With that, here's more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

**AI IN THE NEWS**

**Microsoft rolls out its AI agents.** The software giant has begun making its first set of AI "agents" widely available to customers, a few weeks after rival Salesforce also made a big push into the world of AI systems that can perform tasks for users. Microsoft's first agents can qualify sales leads, communicate with suppliers, or understand customer intent. Some of these agents work within Microsoft's Github Copilot while others work within its Dynamics 365 application, allowing customers to build custom agents, Axios reported.

**Google DeepMind chief and Nobel laureate now runs Gemini too.** Google has put Demis Hassabis, the CEO of Google DeepMind, who just shared the Nobel Prize in Chemistry for DeepMind's work on AI models that can predict protein structures, in charge of its Gemini AI products, Axios reported. Sissie Hsiao, the executive who heads Gemini, will now report to Hassabis. She had been reporting to Prabhakar Raghavan, who had been leading the company's core search and ad businesses but has now moved to a new role as the company's chief technologist. Nick Fox, a longtime Google executive, is taking over Raghavan's former role, minus Gemini.

**Dow Jones, New York Post sue Perplexity.** The two Rupert Murdoch-owned media organizations have filed suit against generative AI search engine Perplexity, alleging that the startup illegally copied their copyrighted content without permission and then profited from it through its search tool, Reuters reported. The suit calls out Perplexity both for inventing false information and wrongly attributing it to the news organizations, and for copying phrases verbatim from the news orgs while attributing that content to other sources. Perplexity did not immediately comment on the lawsuit. (Full disclosure: Fortune has a partnership with Perplexity.)

**Microsoft OpenAI relationship fraying.** That's according to a story in the New York Times, which cited multiple unnamed sources at both companies. OpenAI has been frustrated that Microsoft, which has already invested at least $13 billion in OpenAI and provided the computing power to train its powerful AI models, has declined to provide additional funds and even greater levels of computing resources over the past year, the paper reported. It also said OpenAI has chafed at the role played by Mustafa Suleyman, the DeepMind cofounder and former Google executive whom Microsoft hired in March to lead a new Microsoft AI division.

**Elon Musk's xAI launches an API.** The billionaire's AI company has launched an application programming interface (API) that will let developers integrate its Grok AI model into their software, TechCrunch reports. The API offers access to Grok's multimodal language capabilities. Rival AI companies, such as OpenAI and Anthropic, already provide businesses access to their models through similar APIs.

**Penguin Random House updates copyright language to forbid AI training.** The major publishing house became the first of the "Big Five" publishers to update its copyright-page language to forbid the use of its books for the training of AI models without express permission, trade publication The Bookseller reported. Several lawsuits are pending against AI companies from authors claiming their copyright was violated when the tech companies used their books, without permission, to train AI systems.

**Internet watchdog says AI-generated child sexual abuse imagery approaching 'a tipping point.'** The Internet Watch Foundation is ringing alarm bells over a sharp increase in AI-generated child sexual abuse imagery being found on the public internet, The Guardian reports. In the past, most such imagery was hidden on the dark web.
The AI-generated images are often indistinguishable from real photos, complicating the job of nonprofits and law enforcement agencies working to prevent and prosecute child sexual abuse.

**EYE ON AI RESEARCH**

**What a "war game" exercise tells us about the prospects for international AI governance.** That the prospects for effective international governance are pretty darn poor, it turns out. Researchers from several institutes at the University of Cambridge, University of Oxford, Monash University, and the University of Wichita analyzed the results of a simulation game called "Intelligence Rising," designed to explore how various actors—tech companies and governments among them—would respond to the development of increasingly advanced artificial intelligence, given their competing incentives. The analysis showed that the commercial arms race between companies usually combined with the geopolitical race between countries to make international cooperation extremely difficult.

The findings are a sobering look at how the quest for AI supremacy will likely play out. For instance, espionage and cyberwarfare were widely used in the simulations by players trying to steal technology from one another. Partly as a result, players would often achieve advanced AI at exactly the same time, leading to multipolar geopolitical and market dynamics. Companies often had every incentive to push for safeguards externally—which might constrain their rivals—while secretly relaxing them internally to gain a competitive edge. Cooperation among companies often had to be dictated by governments, and yet government policy struggled to keep pace with rapid AI progress. Meanwhile, in many of the games, government players resorted to military force to try to prevent a rival nation from gaining a decisive advantage (this often involved a Chinese invasion of Taiwan designed to disrupt the supply of advanced computer chips to U.S. tech companies).
In sum, the results of the war games don't give one much hope for our collective ability to put international safeguards around powerful AI systems. You can read the study here on arxiv.org.

**FORTUNE ON AI**

Exclusive: Waymo engineering exec discusses self-driving AI models that will power the cars into new cities —by Sharon Goldman

Investors pour into photonics startups to stop data centers from hogging energy and to speed up AI —by Jeremy Kahn

Luminance debuts AI assistant for lawyers that's aimed at doing some of the legal grunt work —by Jenn Brice

TikTok parent confirms it fired rogue intern for tampering with its AI —by Sasha Rogelberg

**AI CALENDAR**

Oct. 28-30: Voice & AI, Arlington, Va.
Nov. 19-22: Microsoft Ignite, Chicago
Dec. 2-6: AWS re:Invent, Las Vegas
Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia
Dec. 9-10: Fortune Brainstorm AI, San Francisco (register here)

**BRAIN FOOD**

**Could AI actually help improve our democracy?** As we move closer to the U.S. presidential election in two weeks, most people's focus is understandably on AI's potential to spread misinformation and aid election interference efforts. These are very real concerns, and we should be thinking about ways to eliminate these risks going forward (unfortunately, it's too late to do much about this election). But could AI also help enhance our democracy?

Researchers from Google DeepMind published fascinating research in the journal Science this week about what they dubbed a "Habermas Machine" (named after the political philosopher Juergen Habermas). The "Habermas Machine" was an AI model trained to take in opinions from individuals, summarize them in group statements, and then act as a mediator, helping the individuals move toward a group statement that the majority of participants would find acceptable. In tests with 5,000 participants drawn from a demographically representative sample of the U.K. population, the statements the AI model generated were judged more acceptable to more individuals than those developed by professional human moderators.

The idea is that an AI model like this could help run a citizens' assembly, acting as a moderator and helping a group reach consensus. Some see citizens' assemblies such as this as a way to overcome political polarization and find common ground, and also to give citizens more direct input into policymaking than they typically have with other forms of representative government.

