Nvidia Blog · July 11, 13:15
How to Run Coding Assistants for Free on RTX AI PCs and Workstations

This article explores how AI coding assistants are transforming software development, benefiting everyone from experienced developers to novice programmers. These assistants streamline coding tasks, reduce repetitive work and accelerate learning. The article highlights the advantages of running assistants locally, such as keeping code private and lowering costs, and recommends tools like Continue.dev and Tabby. Accelerated by NVIDIA GeForce RTX GPUs, these assistants deliver a fast, responsive experience that improves both development efficiency and learning outcomes. The article also mentions an event NVIDIA is hosting to encourage developers to explore AI technology.

💡AI coding assistants are changing how software is developed, providing code suggestions, explanations and debugging that greatly improve development efficiency.

🧑‍💻These assistants help both experienced developers and novice programmers: the former can focus on complex tasks with less repetitive work, while the latter can learn faster and better understand code logic.

☁️Locally run AI coding assistants offer many advantages, such as code privacy and no subscription fees, and high-performance hardware like NVIDIA GeForce RTX GPUs makes them fast and responsive.

🛠️The article recommends tools such as Continue.dev, Tabby, OpenInterpreter, LM Studio and Ollama, which support running AI models locally and make them easy for developers to use.

🚀NVIDIA GeForce RTX GPU acceleration is critical for AI coding assistants, significantly boosting speed: on an RTX laptop, for example, the Meta Llama 3.1-8B model achieves 5-6x the throughput of a CPU.

Coding assistants or copilots — AI-powered assistants that can suggest, explain and debug code — are fundamentally changing how software is developed for both experienced and novice developers.

Experienced developers use these assistants to stay focused on complex coding tasks, reduce repetitive work and explore new ideas more quickly. Newer coders — like students and AI hobbyists — benefit from coding assistants that accelerate learning by describing different implementation approaches or explaining what a piece of code is doing and why.

Coding assistants can run in cloud environments or locally. Cloud-based coding assistants can be accessed from anywhere, but they come with some limitations and require a subscription. Local coding assistants remove these issues, though they need performant hardware to run well.

NVIDIA GeForce RTX GPUs provide the necessary hardware acceleration to run local assistants effectively.

Code, Meet Generative AI

Traditional software development includes many mundane tasks such as reviewing documentation, researching examples, setting up boilerplate code, authoring code with appropriate syntax, tracing down bugs and documenting functions. These are essential tasks that can take time away from problem solving and software design. Coding assistants help streamline such steps.

Many AI assistants are integrated with popular integrated development environments (IDEs) like Microsoft Visual Studio Code or JetBrains PyCharm, which embed AI support directly into existing workflows.

There are two ways to run coding assistants: in the cloud or locally.

Cloud-based coding assistants require source code to be sent to external servers before responses are returned. This approach can be laggy and impose usage limits. Some developers prefer to keep their code local, especially when working with sensitive or proprietary projects. Plus, many cloud-based assistants require a paid subscription to unlock full functionality, which can be a barrier for students, hobbyists and teams that need to manage costs.

Coding assistants run in a local environment enable cost-free access while keeping code private on device. Running locally on RTX, they offer numerous advantages.

Get Started With Local Coding Assistants

Tools that make it easy to run coding assistants locally include:

- Continue.dev
- Tabby
- OpenInterpreter
- LM Studio
- Ollama

These tools support models served through frameworks like Ollama or llama.cpp, and many are now optimized for GeForce RTX and NVIDIA RTX PRO GPUs.
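As a concrete illustration of serving a model through Ollama, the sketch below sends a prompt to a locally running Ollama server using only the Python standard library. It assumes Ollama is installed and listening on its default port (11434), and the `codellama` model tag is illustrative; substitute any code model you have pulled locally.

```python
import json
import urllib.request

# Ollama's default local REST endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload Ollama's /api/generate endpoint expects.
    stream=False asks for a single complete response instead of chunks."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_assistant(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the generated text.
    The code never leaves the machine, so it stays private."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires e.g. `ollama pull codellama` beforehand (model name is an example).
    print(ask_local_assistant("codellama", "Explain what binary search does."))
```

Because the request goes to `localhost`, no source code is sent to external servers, which is the privacy advantage the article describes.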

See AI-Assisted Learning on RTX in Action

Running on a GeForce RTX-powered PC, Continue.dev paired with the Gemma 12B Code LLM helps explain existing code, explore search algorithms and debug issues — all entirely on device. Acting like a virtual teaching assistant, the assistant provides plain-language guidance, context-aware explanations, inline comments and suggested code improvements tailored to the user’s project.

This workflow highlights the advantage of local acceleration: the assistant is always available, responds instantly and provides personalized support, all while keeping the code private on device and making the learning experience immersive.
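A setup like the one described above can be sketched as a Continue `config.json` that points the assistant at a locally served model. This is a minimal example, assuming an Ollama backend; the `gemma3:12b` model tag and titles are illustrative, and field names follow Continue's `config.json` format at the time of writing.

```json
{
  "models": [
    {
      "title": "Gemma 12B (local)",
      "provider": "ollama",
      "model": "gemma3:12b"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Local autocomplete",
    "provider": "ollama",
    "model": "gemma3:12b"
  }
}
```

With a configuration like this, chat, explanations and inline completions are all served from the local GPU rather than a cloud endpoint.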

That level of responsiveness comes down to GPU acceleration. Models like Gemma 12B are compute-heavy, especially when they’re processing long prompts or working across multiple files. Running them locally without a GPU can feel sluggish — even for simple tasks. With RTX GPUs, Tensor Cores accelerate inference directly on the device, so the assistant is fast, responsive and able to keep up with an active development workflow.

Coding assistants running the Meta Llama 3.1-8B model achieve 5-6x higher throughput on RTX-powered laptops than on CPU. The data measures average tokens per second at batch size (BS) = 1 with input/output sequence lengths (ISL/OSL) of 2000/100, using the Llama-3.1-8B model quantized to int4.

Whether used for academic work, coding bootcamps or personal projects, RTX AI PCs are enabling developers to build, learn and iterate faster with AI-powered tools.

For those just getting started — especially students building their skills or experimenting with generative AI — NVIDIA GeForce RTX 50 Series laptops feature specialized AI technologies that accelerate top applications for learning, creating and gaming, all on a single system. Explore RTX laptops ideal for back-to-school season.

And to encourage AI enthusiasts and developers to experiment with local AI and extend the capabilities of their RTX PCs, NVIDIA is hosting a Plug and Play: Project G-Assist Plug-In Hackathon — running virtually through Wednesday, July 16. Participants can create custom plug-ins for Project G-Assist, an experimental AI assistant designed to respond to natural language and extend across creative and development tools. It’s a chance to win prizes and showcase what’s possible with RTX AI PCs.

Join NVIDIA’s Discord server to connect with community developers and AI enthusiasts for discussions on what’s possible with RTX AI.

Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 

Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.

Follow NVIDIA Workstation on LinkedIn and X.

See notice regarding software product information.
