Fortune | October 21, 2024
Investors pour into photonics startups to stop data centers from hogging energy and to speed up AI

Oriole Networks plans to build an entirely new networking infrastructure for AI supercomputing clusters using photonics, and several photonics companies have attracted substantial investment. Photonics could address the energy-consumption and data-transmission problems facing AI data centers, and different companies are pursuing different applications. Oriole Networks has the most ambitious vision, but its development also faces challenges.

Oriole Networks is a British company planning to build an entirely new networking infrastructure for AI supercomputing clusters based on photonics, transmitting data with light instead of electricity. It has raised $22 million from the London-based venture capital firm Plural.

Photonics has become a hot topic because it could address AI data centers' enormous electricity demands and the time it takes to train large AI models. Several related companies have raised substantial investments, and they differ in how they plan to apply photonics.

Oriole Networks has the most sweeping vision: using photonics to connect every AI chip in a supercomputing cluster, which could dramatically speed up training and reduce energy consumption, though it also faces manufacturing and supply-chain challenges.

Oriole Networks is a spin-out from University College London, built on technology its founders developed over the past two decades. It has raised multiple funding rounds, plans to put test equipment in the hands of potential customers in 2025, and has held discussions with a number of companies.

Oriole Networks, a British company with plans for a completely new networking infrastructure for AI supercomputing clusters that is based on using light instead of electricity to transmit data, has raised $22 million from the London-based venture capital firm Plural.

Photonics, the science of generating, manipulating, and detecting light, is suddenly a hot topic in the tech industry as a potential solution to two big problems facing AI data centers: their colossal electricity demands and the time it can take to train the largest AI models on massive datasets. Just this week, two other companies working on photonic networking for AI chips announced major funding rounds. Lightmatter announced it had raised $400 million in a venture capital deal led by T. Rowe Price that values the seven-year-old company at $4.4 billion. And Xscape Photonics announced it had closed a $44 million investment round led by IAG Capital, with the venture capital arm of network equipment maker Cisco and Nvidia among its other investors. No valuation figures were announced as part of either Xscape's or Oriole Networks' fundraises, both of which were Series A rounds.

The reason photonics is suddenly in vogue has to do with a series of challenges tech companies are encountering as they seek to build ever larger data centers stuffed with hundreds of thousands of specialized chips, in most cases graphics processing units (GPUs), used for training and running AI applications. Conventional networking and switching equipment, which primarily uses copper wiring through which electricity is passed to convey information, is itself becoming a bottleneck to how quickly and easily large AI models can be trained. In other cases, fiber optics are used, but with only a few colors of light traveling in a single cable, which also constrains how much information can be transmitted.

AI models based on neural networks must shuttle a lot of data continuously back and forth through the entire network. But moving all this data between GPUs, including those that might be located in distant server racks, depends on wiring pathways and the capacity of switching equipment to send data zipping to the right place. The way many large AI supercomputing clusters are wired, data traveling from one chip to another located elsewhere in the cluster might have to make as many as nine hops through different network switches before it reaches its destination, said George Zervas, Oriole Networks' cofounder and chief technology officer.

The larger the AI model and the more server racks involved, the more likely it is that this roadway of wiring will become congested, similar to how traffic jams delay commuters. For the largest AI models, 90% of training time can consist of waiting for data in transit across the supercomputing cluster, as opposed to the time it actually takes the chips to run the necessary computations.

Conventional networking equipment, which uses electricity to transmit data, also contributes significantly to the energy requirements of data centers, both by directly consuming power and because the copper wiring dissipates heat, meaning more energy is required to cool the data center. In some data centers, the networking equipment alone can account for 20% of the facility's overall energy consumption. Depending on what energy source is used to power the data center, this electrical demand can result in a colossal carbon footprint.
Meanwhile, many data centers require vast quantities of water to help cool the racks of chips used to run AI applications. Cloud computing companies are anticipating power needs for future AI data centers that are driving them to extreme lengths to secure enough energy. Google, Amazon, and Microsoft have all struck deals that would see nuclear reactors dedicated solely to powering their data centers. OpenAI, meanwhile, had briefed the U.S. government on a plan to possibly construct multiple data centers that would each consume five gigawatts of power, more than the entire city of Miami currently does.

Photonics potentially solves all of these challenges. Using fiber optics to transmit data in the form of light instead of electricity makes it possible to connect more of the chips in a supercomputing cluster directly to one another, reducing or eliminating the need for switching equipment. Photonics also uses far less electricity to transmit data than electronics, and photonic signals produce no heat in transit.

Different photonics companies have different ideas about how to use the technology to revamp data centers. Lightmatter is creating a product called Passage, a light-conducting surface onto which multiple AI chips can be mounted, allowing photonic data transmission between any of the chips on that surface without the need for cabled connections or copper wiring. Fiber-optic cabling would then be used to connect multiple Passage products in a single server rack and for the connections between racks. Xscape envisions photonic equipment and cabling that can transmit and detect hundreds of different colors of light through a single cable, vastly increasing the amount of data that can flow through the network at any one time.

But Oriole Networks may have the most sweeping vision: using photonics to connect every AI chip in a supercomputing cluster to every other chip in the entire cluster. This could result in training times for the largest AI models, such as OpenAI's GPT-4, that are 10 to 100 times faster, Oriole Networks said. It could also mean networks can be trained using a fraction of the power that today's AI supercomputing clusters consume.

To accomplish this, Oriole envisions not just new photonic communication equipment but also new software to help program the network, as well as a new hardware device that can act as the "brain" for the entire network, determining which packets of information need to be sent between which chips at exactly what moment.

"It's completely radical," Oriole CEO James Regan said. "There's no electrical packet switching in the network at all."

Oriole Networks was spun out from University College London in 2023, but it relies on technology that its founders, in particular Zervas, pioneered over the past two decades. In addition to Zervas, a veteran photonics researcher, the company was cofounded by UCL PhD student Alessandro Ottino and postdoctoral fellow Joshua Benjamin, an expert in designing communication networks. They brought on Regan, an experienced entrepreneur who helped create a previous photonics company, as CEO.

The company currently employs 30 people. It raised an initial seed funding round of $13 million in March from a group of investors that includes the venture capital arm of XTX Markets, which operates one of the largest GPU clusters in Europe.
UCL Technology Fund, XTX Ventures, Clean Growth Fund, and Dorilton Ventures all participated in both the seed round and the most recent Series A investment.

Regan said that Oriole is using other companies to manufacture the photonic equipment it is designing, which will keep its capital requirements lower than would otherwise be the case and allow it to move faster. He said it aims to have initial equipment with potential customers to test in 2025. The company has held discussions with most of the "hyperscale" cloud service providers as well as a number of semiconductor companies manufacturing GPUs and AI chips.

Ian Hogarth, the partner at Plural who led the Series A investment, said he was drawn to Oriole Networks because it represented "a paradigm shift" rather than an incremental approach to making AI data centers more energy and resource efficient. Hogarth, who is also the chair of the U.K.'s AI Safety Institute, said he was impressed by the "raw ambition and speed that [Oriole's] founders have brought to the problem."

He said the company fit in with other investments Plural has made in companies helping to combat climate change. Finally, he said he felt it was important for Europe "to have really hard assets when it comes to the evolution of the compute stack, and to not squander the opportunity to translate brilliant inventions from European universities, UK universities, into iconic companies."

Of course, there has been hype about photonics before, and it hasn't always panned out. During the first internet boom of the late 1990s and early 2000s, there was also great excitement about the potential for photonics to become the primary backbone of the internet, including for switching equipment. Venture capitalists back then also poured money into the sector. But most of those investments failed because of a lack of maturity in the photonics industry. Parts were difficult and expensive to manufacture and had higher failure rates than semiconductors and more conventional electronic switching equipment. Then, when the dot-com bubble burst, it largely took the photonics boom down with it.

Regan says that things are different today. The ecosystem of companies making photonic integrated circuits and photonic equipment is more robust than it was, and the technology is far more reliable, he said. A decade ago, a company like Oriole Networks would have had to manufacture much of the equipment it wants to produce itself, a much more capital-intensive and risky proposition. Today, there is a reliable supply chain of contract manufacturers that can execute designs developed by Oriole, he said.

