Nvidia Blog · March 1
Explore How RTX AI PCs and Workstations Supercharge AI Development at NVIDIA GTC 2025


Generative AI is redefining computing, unlocking new ways to build, train and optimize AI models on PCs and workstations. From content creation and large and small language models to software development, AI-powered PCs and workstations are transforming workflows and enhancing productivity.

At GTC 2025, running March 17–21 at the San Jose Convention Center, experts from across the AI ecosystem will share insights on deploying AI locally, optimizing models and harnessing cutting-edge hardware and software to enhance AI workloads — highlighting key advancements in RTX AI PCs and workstations.

Develop and Deploy on RTX

RTX GPUs are built with specialized AI hardware called Tensor Cores that provide the compute performance needed to run the latest and most demanding AI models. These high-performance GPUs can help build digital humans, chatbots, AI-generated podcasts and more.

With more than 100 million GeForce RTX and NVIDIA RTX GPU users, developers have a large audience to target when new AI apps and features are deployed. In the session “Build Digital Humans, Chatbots, and AI-Generated Podcasts for RTX PCs and Workstations,” Annamalai Chockalingam, senior product manager at NVIDIA, will showcase the end-to-end suite of tools developers can use to streamline development and deploy incredibly fast AI-enabled applications.

Model Behavior

Large language models (LLMs) can be used for an abundance of use cases — and scale to tackle complex tasks like writing code or translating Japanese into Greek. But since they’re typically trained with a wide spectrum of knowledge for broad applications, they may not be the right fit for specific tasks, like nonplayer character dialog generation in a video game. In contrast, small language models trade that breadth for reduced size, maintaining accuracy on targeted tasks while running locally on more devices.

In the session “Watch Your Language: Create Small Language Models That Run On-Device,” Oluwatobi Olabiyi, senior engineering manager at NVIDIA, will present tools and techniques that developers and enthusiasts can use to generate, curate and distill a dataset — then train a small language model that can perform tasks designed for it.
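The distillation step the session describes — training a compact student model against a larger teacher's outputs — can be sketched with a temperature-scaled softmax and a KL-divergence loss. This is a generic illustration of the distillation technique, not NVIDIA's specific pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimizing this nudges the student's predictions toward the teacher's
    'soft targets' for the same input.
    """
    p = softmax(teacher_logits, temperature)   # teacher soft targets
    q = softmax(student_logits, temperature)   # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher exactly incurs zero loss.
teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))  # → 0.0
```

In practice the loss above is averaged over a curated dataset and usually combined with a standard cross-entropy term on the ground-truth labels.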

Maximizing AI Performance on Windows Workstations

Optimizing AI inference and model execution on Windows-based workstations requires strategic software and hardware tuning due to diverse hardware configurations and software environments. The session “Optimizing AI Workloads on Windows Workstations: Strategies and Best Practices” will explore best practices for AI optimization, including model quantization, inference pipeline enhancements and hardware-aware tuning.
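Model quantization, one of the optimizations the session covers, maps floating-point weights to low-precision integers so models run in less memory and with faster arithmetic. A minimal sketch of symmetric int8 post-training quantization — a generic illustration only; production toolchains such as ONNX Runtime and TensorRT additionally handle calibration, per-channel scales and operator fusion:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats in [-max|w|, max|w|] to [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the shared scale."""
    return [qi * scale for qi in q]

weights = [0.52, -1.27, 0.08, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each recovered weight is within half a quantization step of the original.
assert all(abs(w - a) <= scale / 2 for w, a in zip(weights, approx))
```

The accuracy cost comes from that rounding error, which is why quantized models are validated against a calibration dataset before deployment.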

A team of NVIDIA software engineers will also cover hardware-aware optimizations for ONNX Runtime, NVIDIA TensorRT and llama.cpp, helping developers maximize AI efficiency across GPUs, CPUs and NPUs.

Advancing Local AI Development

Building, testing and deploying AI models on local infrastructure ensures security and performance even without a connection to cloud-based services. Accelerated with NVIDIA RTX GPUs, both Dell Pro Max AI and Z by HP solutions provide powerful tools for on-prem AI development, helping professionals maintain control over data and IP while optimizing performance.

Learn more by attending these and related sessions at GTC.

Developers and enthusiasts can get started with AI development on RTX AI PCs and workstations using NVIDIA NIM microservices. Rolling out today, the initial public beta release includes the Llama 3.1 LLM, NVIDIA Riva Parakeet for automatic speech recognition (ASR), and YOLOX for computer vision.

NIM microservices are optimized, prepackaged models for generative AI. They span modalities important for PC development, and are easy to download and connect to via industry-standard application programming interfaces.
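For language models, the industry-standard interface NIM exposes is an OpenAI-style chat-completions endpoint. The sketch below builds such a request against a locally running Llama 3.1 NIM; the port, endpoint path and model identifier are assumptions based on NIM's documented conventions, so verify them against your deployment:

```python
import json
from urllib import request

# Assumed local NIM endpoint and model id -- verify against your deployment.
NIM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "meta/llama-3.1-8b-instruct"

def build_chat_request(prompt, model=MODEL, max_tokens=128):
    """Build an OpenAI-style chat-completions payload for a NIM microservice."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(prompt):
    """POST the payload to the locally running NIM and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = request.Request(NIM_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Because the payload follows the OpenAI schema, existing client libraries and tools built against that API can typically be pointed at the local service by swapping the base URL.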

Attend GTC 2025

From the keynote by NVIDIA founder and CEO Jensen Huang to over 1,000 inspiring sessions, 300+ exhibits, technical hands-on training and tons of unique networking events — GTC is set to put a spotlight on AI and all its benefits.

Follow NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
