MarkTechPost@AI, August 8, 2024
NYU Researchers Open-Sourced GPUDrive: A GPU-Accelerated Multi-Agent Driving Simulation at 1 Million FPS

NYU researchers have introduced GPUDrive, an innovative simulator for multi-agent driving scenarios with high-speed simulation and reinforcement learning capabilities.

🎯 GPUDrive targets the core challenges of multi-agent learning by combining real-world driving data with high-speed simulation for self-driving planner design. It runs at over a million steps per second on both consumer-grade and datacenter-class GPUs, supporting large numbers of simultaneous worlds with many agents per world.

🚗 The simulator offers multiple sensor modalities, including LIDAR and human-like view cones, enabling the study of how different sensor types shape agent behavior. It also applies several technical innovations to the specific challenges of driving simulation, such as tracking physics entities with a Bounding Volume Hierarchy (BVH) and simplifying road geometry with a polyline decimation algorithm.

💪 GPUDrive excels at both simulation speed and reinforcement learning, reaching over a million agent steps per second on a consumer-grade GPU and training 25-40x faster than Nocturne. It makes effective use of large datasets, improves sample efficiency, and accelerates multi-agent learning research.

Multi-agent planning for mixed human-robot environments faces significant challenges. Current methodologies, often relying on data-driven human motion prediction and hand-tuned costs, struggle with long-term reasoning and complex interactions. Researchers aim to solve two primary issues: developing human-compatible strategies without clear equilibrium concepts and generating sufficient samples for learning algorithms. Existing approaches, while effective in scaling real-world autonomy, falter in rare, complex scenarios. The divergence between techniques used in zero-sum games and practical robotic systems highlights the need for innovative solutions that can bridge this gap and improve multi-agent planning in human-robot settings.

Existing approaches to multi-agent planning in mixed human-robot environments include various frameworks and simulators. Open-source platforms like JaxMARL, Jumanji, and VMAS offer hardware-accelerated environments for fully cooperative or competitive tasks. GPUDrive, built on Madrona, provides a mixed-motive setting with GPU acceleration, supporting numerous agents across diverse scenarios and including human demonstrations.

In autonomous driving, simulators like MetaDrive, nuPlan, Nocturne, and Waymax utilize real-world data. GPUDrive focuses on behavioral and control aspects, offering GPU acceleration, various sensor modalities, and extensive scalability. Simulators often feature baseline agents such as car-following models, rule-based agents, and recorded human driving logs. Some incorporate learning-based agents using reinforcement learning. GPUDrive combines human driving logs with high-performing reinforcement learning agents, creating a comprehensive environment for studying multi-agent learning in autonomous driving scenarios.

Researchers from New York University and Stanford University introduced GPUDrive, an innovative simulator designed to overcome the challenges in multi-agent learning for self-driving planners. It combines real-world driving data with high-speed simulation capabilities, enabling the application of sample-inefficient but effective reinforcement learning algorithms to planner design. Running at over a million steps per second on both consumer-grade and datacenter-class GPUs, GPUDrive supports hundreds to thousands of simultaneous worlds with hundreds of agents per world. The simulator offers a variety of sensor modalities, including LIDAR and human-like view cones, allowing researchers to study the effects of different sensor types on agent characteristics. GPUDrive’s ability to incorporate driving logs and maps from existing self-driving datasets facilitates the integration of imitation learning tools with reinforcement learning algorithms.
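
To make that scale concrete, the sketch below shows the general shape of a batched rollout loop over many parallel worlds. The `ParallelDrivingSim` class, its constants, and its `reset`/`step` signatures are hypothetical placeholders for illustration rather than GPUDrive's actual Python bindings; the point is that observations, actions, and rewards are tensors with leading world and agent dimensions that stay on the accelerator.

```python
# Illustrative only: a batched rollout loop over many simulated worlds.
# ParallelDrivingSim and its methods are hypothetical stand-ins, not the real
# GPUDrive bindings; they mimic the shape of a GPU-resident, vectorized simulator.
import torch

NUM_WORLDS = 1024   # independent traffic scenarios stepped in lockstep (assumed)
MAX_AGENTS = 128    # upper bound on controlled agents per world (assumed)
OBS_DIM = 64        # flattened per-agent observation size (assumed)
ACTION_DIM = 2      # e.g. acceleration and steering

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

class ParallelDrivingSim:
    """Hypothetical stand-in for a GPU-accelerated multi-world driving simulator."""

    def reset(self):
        # One observation per (world, agent); the tensor lives on the accelerator.
        return torch.zeros(NUM_WORLDS, MAX_AGENTS, OBS_DIM, device=DEVICE)

    def step(self, actions):
        # actions: (NUM_WORLDS, MAX_AGENTS, ACTION_DIM) tensor on the same device.
        obs = torch.zeros(NUM_WORLDS, MAX_AGENTS, OBS_DIM, device=DEVICE)
        rewards = torch.zeros(NUM_WORLDS, MAX_AGENTS, device=DEVICE)
        dones = torch.zeros(NUM_WORLDS, MAX_AGENTS, dtype=torch.bool, device=DEVICE)
        return obs, rewards, dones

sim = ParallelDrivingSim()
obs = sim.reset()
for _ in range(100):
    # A trained policy would map obs to actions; random actions keep the sketch short.
    actions = torch.rand(NUM_WORLDS, MAX_AGENTS, ACTION_DIM, device=DEVICE) * 2 - 1
    obs, rewards, dones = sim.step(actions)
```

With 1024 worlds and up to 128 agents each, a single `step` call advances on the order of a hundred thousand agents at once, which is how per-wall-clock-second throughput can reach millions of agent steps.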

GPUDrive’s simulation design addresses the challenge of generating billions of environment samples for multi-agent learning in self-driving scenarios. Built on the Madrona framework, it offers high-throughput reinforcement learning environments that execute many independent worlds in parallel on accelerators. The simulator tackles specific challenges in driving simulation through several technical innovations. It uses a Bounding Volume Hierarchy (BVH) to efficiently track physics entities and reduce collision checks. A polyline decimation algorithm simplifies road geometry, significantly reducing memory usage and improving step times. It also supports various observation spaces, including a radius-based observation, LIDAR scans, and a human-like view cone. Scenarios come from the Waymo Open Motion Dataset, with maps represented as polylines and expert human driving demonstrations included. Agent dynamics are modeled with both Ackermann and simplified bicycle models, allowing for different vehicle characteristics and providing invertibility for imitation learning.
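
The invertibility point is what connects the logged human demonstrations to imitation learning: if the dynamics can be inverted, the action that carried a vehicle from one logged state to the next can be recovered and used as a supervision target. Below is a minimal sketch of a generic kinematic bicycle model and its inverse; the state convention, time step, and wheelbase value are assumptions for illustration, not GPUDrive's exact dynamics code.

```python
# A minimal, generic kinematic bicycle model and its inverse.
# The state convention (x, y, heading, speed), the controls (acceleration,
# steering angle), the 0.1 s time step, and the wheelbase are illustrative
# assumptions; GPUDrive's exact dynamics may differ.
import math

WHEELBASE = 2.8  # meters (assumed)

def bicycle_step(x, y, heading, speed, accel, steer, dt=0.1):
    """Advance one step: update speed, then heading, then position."""
    new_speed = speed + accel * dt
    new_heading = heading + new_speed * math.tan(steer) / WHEELBASE * dt
    new_x = x + new_speed * math.cos(new_heading) * dt
    new_y = y + new_speed * math.sin(new_heading) * dt
    return new_x, new_y, new_heading, new_speed

def invert_bicycle_step(heading, speed, new_heading, new_speed, dt=0.1):
    """Recover the (accel, steer) pair that maps one logged state to the next,
    e.g. to turn human driving logs into imitation-learning action labels."""
    accel = (new_speed - speed) / dt
    if abs(new_speed) < 1e-6:  # vehicle essentially stopped: steering is unobservable
        return accel, 0.0
    steer = math.atan((new_heading - heading) * WHEELBASE / (new_speed * dt))
    return accel, steer
```

Under this discretization the round trip is exact: feeding the recovered `(accel, steer)` back into `bicycle_step` reproduces the next logged heading and speed, which is what makes logged trajectories usable as action labels for imitation learning.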

GPUDrive demonstrates exceptional performance in simulation speed and reinforcement learning. It achieves over a million Agent Steps Per Second on consumer-grade GPUs, significantly outperforming CPU-based implementations. The simulator provides a 25-40x training speedup compared to Nocturne, solving scenarios in minutes instead of hours. GPUDrive’s scalability is evident as it improves sample efficiency with larger datasets, taking only 15 seconds per scenario when training on 1024 unique scenarios. This performance enables effective utilization of large datasets like the Waymo Open Motion Dataset, even with limited computational resources, potentially accelerating multi-agent learning research in autonomous driving.
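
Read as a back-of-the-envelope calculation, and assuming the 15-second figure is the amortized training cost per scenario, those numbers imply roughly the following:

```python
# Back-of-the-envelope reading of the reported figures; interpreting the 15 s
# number as an amortized per-scenario training cost is an assumption.
seconds_per_scenario = 15         # reported amortized cost per scenario
num_scenarios = 1024              # scenarios in the reported training run
speedup_over_nocturne = (25, 40)  # reported training speedup range

total_hours = seconds_per_scenario * num_scenarios / 3600
print(f"Training on all {num_scenarios} scenarios: ~{total_hours:.1f} hours")

# Applying the reported speedup range in reverse gives a rough sense of the
# equivalent cost at Nocturne's pace (an inference, not a reported number).
low, high = (total_hours * s for s in speedup_over_nocturne)
print(f"Equivalent Nocturne-pace estimate: ~{low:.0f}-{high:.0f} hours")
```

That works out to about 4.3 hours of training on the full 1024-scenario set, versus several days at a 25-40x slower pace.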

This research introduces GPUDrive, an innovative GPU-accelerated simulator designed to generate the vast amount of data needed for effective reinforcement learning in multi-agent driving scenarios. By utilizing the Madrona Engine, it achieves remarkable throughput, processing millions of steps per second across hundreds of worlds and agents. This efficiency dramatically reduces training time, allowing scenarios to be solved in minutes, or even seconds when cost is amortized across scenarios. While it represents a significant advancement in scaling reinforcement learning for multi-agent planning in autonomous driving, the researchers acknowledge remaining challenges, including optimizing hyperparameters, addressing the impact of reset calls, and achieving human-level driving performance across all scenarios.


Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.

