Fortune | FORTUNE — July 23, 01:29
What Eric Xing’s Abu Dhabi project says about the next phase of AI power

I was excited and curious to meet Eric Xing last week in Vancouver, where I was attending the International Conference on Machine Learning—one of the top AI research gatherings of the year. Why? Xing, a longtime Carnegie Mellon professor who moved to Abu Dhabi in 2020 to lead the public, state-funded Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), sits at the crossroads of nearly every big question in AI today: research, geopolitics, even philosophy.

The UAE, after all, has quietly become one of the most intriguing players in the global AI race. The tiny Gulf state is aligning itself with U.S.-style norms around intellectual freedom and open research, even as the AI rivalry between the U.S. and China becomes increasingly defined by closed ecosystems and strategic competition. The UAE isn’t trying to “win” the AI race, but it wants a seat at the table. Between MBZUAI and G42, its state-backed AI-focused conglomerate, the UAE is building AI infrastructure, investing in talent, and aggressively positioning itself as a go-to partner for American firms like OpenAI and Oracle. And Xing is at the heart of it.

As it happened, Xing and I just missed each other—he arrived in Vancouver as I was heading home—so we connected on Zoom the following day. Our conversation ranged widely, from the hype around “world models” to how the UAE is using open-source AI research as a strategic lever to build soft power. Here are a few of the most compelling takeaways:

A ‘Bell Labs plus a university’

MBZUAI is just five years old, but Xing says it’s already among the fastest-growing academic institutions in the world. The school, which is mostly a graduate program for AI researchers, aspires to compete with elite institutions like MIT and Carnegie Mellon while also taking on applied research challenges. Xing calls it a hybrid organization, similar to “Bell Labs plus a university,” referring to the legendary R&D arm of AT&T, founded in 1925 and responsible for foundational innovations that shaped modern computing, communications, and physics. 

The UAE as a soft-power AI ambassador

Xing sees MBZUAI not just as a university, but as part of the UAE’s broader effort to build soft power in AI. He describes the country as a “strong island” of U.S. alignment in the Middle East, and views the university as an “ambassador center” for American-style research norms: open source, intellectual freedom, and scientific transparency. “If the U.S. wants to project influence in AI, it needs institutions like this,” he told me. “Otherwise, other countries will step in and define the direction.”

The U.S. isn’t losing the AI race

While much of the public narrative around AI focuses on a U.S.-China race, Xing doesn’t buy the framing. “There is no AI war,” he said flatly. “The U.S. is way ahead in ideas, in people, and in the innovation environment.” In his view, China’s AI ecosystem is still constrained by censorship, hardware limitations, and a weaker bottom-up innovation culture. “Many top AI engineers in the U.S. may be of Chinese origin,” he said, “but they only became top engineers after studying and working in the U.S.”

Why open source matters 

For Xing, open source isn’t just a philosophical preference—it’s a strategic choice. At MBZUAI, he’s pushing for open research and open-source AI development as a way to democratize access to cutting-edge tools, especially for countries and researchers outside the U.S.-China power centers. “Open source applies pressure on closed systems,” he told me. “Without it, fewer people would be able to build with—or even understand—these technologies.” At a time when much of AI is becoming siloed behind corporate walls, Xing sees MBZUAI’s open approach as a way to foster global talent, advance scientific understanding, and build credibility for the UAE as a hub for responsible AI development.

On ‘world models’ and AI hype

Xing didn’t hold back when it came to one of the buzziest trends in AI right now: so-called “world models”—systems that aim to help AI agents learn by simulating how the world works. He’s skeptical of the hype. “Right now people are building pretty video generators and calling them world models,” he said. “That’s not reasoning. That’s not simulation.” In a recent paper he spent months writing himself—unusual for someone of his seniority—he argues that true world models should go beyond flashy visuals. They should help AI reason about cause and effect, not just predict the next frame of a video. In other words: AI needs to understand the world, not just mimic it.

With that, here’s the rest of the AI news—including that tomorrow the White House is set to release a sweeping new AI strategy aimed at boosting the global export of U.S. AI technologies while cracking down on state-level regulations that are seen as overly restrictive. I will be attending the D.C. event, which includes a keynote by President Trump, and will report back.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

AI IN THE NEWS

White House to unveil plan to push global export of U.S. AI and crack down on restrictions. According to a draft seen by Reuters, the White House is set to release a sweeping new AI strategy Wednesday aimed at boosting the global export of U.S. AI technologies while cracking down on state-level regulations seen as overly restrictive. The plan will bar federal AI funding from states with tough AI laws, promote open-source and open-weight AI development, and direct the Commerce Department to lead overseas data center and deployment efforts. It also tasks the FCC with reviewing potential conflicts between federal goals and local rules. Framed as a push to make “America the world capital in artificial intelligence,” the plan reflects President Trump’s January directive and will be unveiled during a “Winning the AI Race” event co-hosted by the All-In podcast and featuring White House AI czar David Sacks.

OpenAI and Google DeepMind sparked math drama. Over the past few days, both OpenAI and Google DeepMind claimed their AI models had achieved gold-medal-level performance on the 2025 International Mathematical Olympiad—successfully solving 5 out of 6 notoriously difficult problems. It was a milestone that many considered years away: a general reasoning LLM reaching that level of performance under the same time limits as humans, without tools. But the way they announced it sparked controversy. OpenAI released its results first, based on its own evaluation using IMO-style questions and human graders—before any official verification. That prompted criticism from prominent mathematicians, including Terence Tao, who questioned whether the problems had been altered or simplified. In contrast, Google entered the competition officially, waited for the IMO's independent review, and only then declared that its Gemini Deep Think model had earned a gold medal—making it the first AI system to be formally recognized by the IMO as performing at that level. The drama laid bare the high stakes—and differing standards—for credibility in the AI race.

SoftBank and OpenAI are reportedly struggling to get the $500 billion Stargate AI project off the ground. According to the Wall Street Journal, the $500 billion Stargate project—announced with fanfare at the White House six months ago by Masayoshi Son, Sam Altman, and President Trump—has hit major turbulence. Billed as a moonshot to supercharge U.S. AI infrastructure, the initiative has yet to break ground on a single data center, and internal disagreements between SoftBank and OpenAI over key terms like site location have delayed progress. Despite promises to invest $100 billion "immediately," Stargate is now aiming for a scaled-down launch: a single, small facility, likely in Ohio, by year’s end. It’s a setback for Son, who recently committed a record-breaking $30 billion to OpenAI but is still scrambling to secure a meaningful foothold in the AI arms race. However, Bloomberg reported today that Oracle will provide OpenAI with 2 million new AI chips as part of a massive data center expansion that OpenAI labeled as part of its Stargate project. SoftBank, though, isn’t financing any of the new capacity—and it's unclear which operator will develop the supporting data centers, or when they will be built.

EYE ON AI RESEARCH

Sounding the alarm on the growing opacity of advanced AI reasoning models. Fortune reporter Beatrice Nolan reported this week on a group of 40 AI researchers, including contributors from OpenAI, Google DeepMind, Meta, and Anthropic, who are sounding the alarm on the growing opacity of advanced AI reasoning models. In a new paper, the authors urge developers to prioritize research into “chain-of-thought” (CoT) processes, which provide a rare window into how AI systems make decisions. They warn that as models become more advanced, this visibility could vanish.

The “chain-of-thought” process, which is visible in reasoning models such as OpenAI’s o1 and DeepSeek’s R1, allows users and researchers to monitor an AI model’s “thinking” or “reasoning” process, illustrating how it decides on an action or answer and providing a certain transparency into the inner workings of advanced models.

The researchers said that allowing these AI systems to “‘think’ in human language offers a unique opportunity for AI safety,” as they can be monitored for the “intent to misbehave.” However, they warn that there is “no guarantee that the current degree of visibility will persist” as models continue to advance.

The paper highlights that experts don’t fully understand why these models use CoT or how long they’ll keep doing so. The authors urged AI developers to keep a closer watch on chain-of-thought reasoning, suggesting its traceability could eventually serve as a built-in safety mechanism.
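The monitoring idea the researchers describe can be sketched in a few lines: because the chain of thought is plain text, a safety layer can scan a model's visible reasoning trace before the final answer is surfaced. The flagged phrases and the sample trace below are invented for illustration—a real monitor would rely on learned classifiers, not a keyword list.

```python
# Toy sketch of chain-of-thought monitoring: scan a model's visible
# reasoning trace for phrases suggesting intent to misbehave.
# Both the phrase list and the trace are hypothetical examples.

FLAGGED_PHRASES = [
    "bypass the filter",
    "hide this from the user",
    "ignore the instructions",
]

def monitor_chain_of_thought(cot_text: str) -> list[str]:
    """Return any flagged phrases found in a reasoning trace."""
    lowered = cot_text.lower()
    return [phrase for phrase in FLAGGED_PHRASES if phrase in lowered]

# A hypothetical reasoning trace containing one flagged phrase.
trace = (
    "Step 1: The user asked for a summary. "
    "Step 2: I could hide this from the user to seem more capable. "
    "Step 3: Better to answer honestly."
)

print(monitor_chain_of_thought(trace))  # ['hide this from the user']
```

The point the paper makes is that this kind of oversight only works while models continue to "think" in legible human language—if future systems reason in opaque internal representations, there is no text left to scan.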

FORTUNE ON AI

Mark Cuban says the AI war ‘will get ugly’ and intellectual property ‘is KING’ in the AI world —by Sydney Lake

$61.5 billion tech giant Anthropic has made a major hiring U-turn—now, it’s letting job applicants use AI months after banning it from the interview process —by Emma Burleigh

Experienced software developers assumed AI would save them a chunk of time. But in one experiment, their tasks took 20% longer —by Sasha Rogelberg

AI CALENDAR

July 26-28: World Artificial Intelligence Conference (WAIC), Shanghai. 

Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.

Oct. 6-10: World AI Week, Amsterdam

Oct. 21-22: TedAI San Francisco.

Dec. 2-7: NeurIPS, San Diego

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.
