Unknown source · September 15, 2024
Magic partners with Google Cloud to train frontier-scale LLMs

More than half of the world's generative AI startups, including over 90% of the unicorns, build their next-generation AI applications on Google Cloud. Magic and many other companies are partnering with Google Cloud, drawing on its infrastructure and other resources to drive innovation.

🎯 More than half of the world's generative AI startups, and over 90% of generative AI unicorns, have chosen Google Cloud, using its trusted infrastructure, variety of hardware systems, the Vertex AI platform, and more to develop AI applications, models, and tools.

🌟 Magic is partnering with Google Cloud to build cloud-based supercomputers for developing code assistants with ultra-long context windows, and will use Google Cloud's AI Hypercomputer architecture and tooling to train frontier-scale LLMs.

💻 Many AI startups, including Arize AI, Character AI, and Cohere, use Google Cloud services to build and scale their platforms, spanning model observability and evaluation, database scaling, AI model training, and more.

More than half of the world’s generative AI startups, including more than 90% of generative AI unicorns, are building on Google Cloud — utilizing our trusted infrastructure, a variety of hardware systems, the Vertex AI platform, and much more. These startups are building the next generation of AI applications, models, and tooling that will be used by millions of businesses, entrepreneurs, developers, students, and more in the coming months and years.

Supporting Magic with Google Cloud’s AI Platform

Today, Magic announced it is partnering with Google Cloud to build two new cloud-based supercomputers that will support Magic’s mission of developing code assistants with a context window reaching 100 million tokens (enough information to equal 10 years of human speech). Magic has selected Google Cloud as its preferred cloud provider, and the San Francisco-based startup will utilize Google Cloud’s AI Hypercomputer architecture and tooling, which will help build frontier-scale AI models that can automate aspects of software engineering.

With Google Cloud, Magic will build its G4 supercomputer utilizing A3 Mega VMs powered by NVIDIA H100 Tensor Core GPUs. For its next-generation G5 supercomputer, Magic will be one of the first users to migrate to the NVIDIA Grace Blackwell platform on Google Cloud when it becomes available early next year, scaling up to tens of thousands of GPUs. These computers will be able to achieve 160 exaflops, a measure of computing performance so large that it's roughly equal to 160 billion people each holding one billion calculators and running a computation at the same exact moment.
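The calculator analogy is just unit arithmetic: 160 exaflops is 160 × 10¹⁸ operations per second, which matches 160 billion people times one billion calculators each. A quick sketch to check the figures (the variable names are ours, for illustration):

```python
# Sanity-check the 160-exaflops analogy.
exaflops = 160 * 10**18            # 160 exaFLOPS, in operations per second
people = 160 * 10**9               # 160 billion people
calculators_per_person = 10**9     # one billion calculators each

# One computation per calculator, all at the same instant:
simultaneous_ops = people * calculators_per_person

assert simultaneous_ops == exaflops  # both equal 1.6e20 operations
```

Both sides come to 1.6 × 10²⁰ operations, so the comparison holds exactly.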

Magic’s goal is to build an “automated AI software engineer and researcher” that can see and understand an organization’s entire code repository and complete large tasks over long time horizons. To do this, they are training frontier-scale LLMs with ultra-long context windows and other advanced capabilities. The compute required to train these models, and perform AI inference, is significant; Google Cloud is providing the trusted AI platform, reliable access to compute power, and first-hand experience scaling AI products to help the Magic team bring its products to market.

Magic CEO and founder Eric Steinberger said: “Magic’s goal is to build AGI, and that will take a lot of compute. Google Cloud will be a valuable partner to Magic as we train and serve our next-gen models. Google’s experience operating the largest infrastructure systems in the world will help our team be maximally effective, and their team has been incredibly supportive as we started ramping up.”

Driving Innovation at the World’s Most Exciting AI Startups

Magic joins a thriving ecosystem of exciting startup businesses who are building foundational models, AI tooling, and applications on Google Cloud. For example:

  • Arize AI, which offers a platform for model observability and evaluation, continues to utilize Google Cloud services including GKE, Vertex AI, and Google Cloud Marketplace to build and scale its platform and bring it to market.
  • Character AI named Google Cloud as its preferred cloud infrastructure in 2023, and utilizes TPUs and GPUs for faster training and inference of its models, as well as our AlloyDB database to scale its rapidly growing database load.
  • Cohere has partnered with Google Cloud since 2021 to utilize TPUs for training and inference with its enterprise-grade frontier AI models, and to bring its security- and privacy-focused platform to market on Google Cloud Marketplace.
  • Labelbox powers Google Cloud’s LLM evaluation service, and Google Cloud recently partnered with Labelbox to allow Vertex AI customers to seamlessly leverage human raters to evaluate LLM responses while handling the entire workforce and labeling orchestration.
  • Mistral began working with Google Cloud in 2023, using our AI-optimized infrastructure, including TPUs, to scale up its LLMs and offering its foundational model, Mistral-7B, on Vertex AI.
  • Glean uses a mix of Google Cloud services for its AI assistant and enterprise search platform, including BigQuery for data analytics, TPUs for model training, and App Engine and GKE to scale its platform reliability.
  • Higgsfield is using Gemini and our AI-optimized infrastructure, including GPUs, to power its AI video creation platform and for training and inference of its proprietary model. Google Cloud is also helping Higgsfield implement AI safety standards, including watermarking, to help prevent the production of malicious content.
  • Jasper is working with Google Cloud to power its marketing content creation tools, including utilizing Gemini models to help users automatically generate content like blog posts or product descriptions for their customers.
  • Repl.it is a popular AI-powered software development and deployment platform. The company utilizes Google Cloud services, infrastructure, and Gemini models to help its 20 million-plus users create high-quality code more quickly.
  • ThoughtSpot’s search and AI-powered analytics platform makes it simple to ask and answer questions with data. ThoughtSpot integrates Gemini models to power its AI features, helping customers tap into new levels of productivity.
  • Typeface, the generative AI platform for enterprise content creation, empowers Fortune 500 brands and enterprise marketers to create multimodal branded content. An early partner of Google’s GenAI foundational models, Typeface delivers end-to-end content workflows across Google platforms, including Google Cloud, Google Ads, and Google Workspace.
  • Weights & Biases’ Weave is a user-friendly, lightweight toolkit designed to help developers track and evaluate applications built on the Gemini family of multimodal LLMs in a more organized and efficient manner. The Gemini ecosystem includes some of the most powerful models, featuring extremely long context and multimodal capabilities that allow reasoning across text, images, audio, and video.
  • Writer uses a variety of Google Cloud services to power its generative AI enterprise content platform, including AlloyDB, GKE, BigQuery, and GPUs on Google Cloud.

You can learn more about Magic’s work to train large scale AI models on Google Cloud here.
