Helm.ai Introduces VidGen-2: High-Res Multi-Camera Video for Autonomous Driving

Today, Helm.ai announced VidGen-2, its next-generation generative AI model for producing realistic driving video. The model doubles the resolution of its predecessor, supports frame rates up to 30 frames per second, and adds multi-camera generation, giving automakers a scalable, cost-effective solution for developing and validating autonomous driving technology.

VidGen-2 was trained on thousands of hours of diverse driving footage using NVIDIA H100 GPUs, combining Helm.ai’s generative deep neural network architecture with its Deep Teaching™ methodology for efficient unsupervised training.

It produces realistic video sequences at 696 x 696 resolution, double that of VidGen-1, at frame rates from 5 to 30 fps. The model also offers enhanced video quality at 640 x 384 resolution and 30 fps for smoother simulations. Videos can be generated without an input prompt, or prompted with a single image or an input video.

VidGen-2 also supports multi-camera views, generating footage from three cameras at 640 x 384 resolution each. This ensures self-consistency across perspectives for accurate sensor simulations.
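
To make the published figures concrete, here is a minimal Python sketch that models a hypothetical generation request and checks it against the specifications stated above (696 x 696 single-camera output, three 640 x 384 streams in multi-camera mode, frame rates from 5 to 30 fps). Helm.ai has not published a public API, so the class, field, and constant names below are illustrative assumptions only.

```python
from dataclasses import dataclass

# Constants reflect only the figures stated in the announcement.
SINGLE_CAMERA_RES = (696, 696)   # single-camera output resolution
MULTI_CAMERA_RES = (640, 384)    # per-camera resolution in multi-camera mode
MULTI_CAMERA_COUNT = 3           # three self-consistent camera views
FPS_RANGE = (5, 30)              # supported frame-rate range

@dataclass
class VideoGenRequest:
    """Hypothetical request object; not an actual Helm.ai interface."""
    fps: int = 30
    multi_camera: bool = False
    prompt_image: str | None = None   # optional single-image prompt
    prompt_video: str | None = None   # optional video prompt

    def resolution(self) -> tuple[int, int]:
        # Resolution implied by the announced specs for the chosen mode.
        return MULTI_CAMERA_RES if self.multi_camera else SINGLE_CAMERA_RES

    def validate(self) -> None:
        lo, hi = FPS_RANGE
        if not lo <= self.fps <= hi:
            raise ValueError(f"fps must be between {lo} and {hi}")
        if self.prompt_image and self.prompt_video:
            raise ValueError("prompt with a single image or a video, not both")

# Example: a three-camera request at 30 fps, prompted from one image.
req = VideoGenRequest(fps=30, multi_camera=True, prompt_image="frame_000.png")
req.validate()
print(req.resolution(), MULTI_CAMERA_COUNT if req.multi_camera else 1)
```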

The model generates driving scene videos spanning a variety of geographies, camera types, and vehicle perspectives. It not only produces realistic imagery but also learns human-like driving behavior, simulating the motion of the ego vehicle and surrounding agents in accordance with traffic rules.

VidGen-2 generates a wide range of scenarios, including highway and urban driving, diverse vehicle types, pedestrians, cyclists, intersections, turns, and varying weather and lighting conditions. In multi-camera mode, scenes are generated consistently across all perspectives.

This model provides automakers with a major scalability advantage over traditional non-AI simulators by enabling rapid asset generation and equipping agents with realistic behaviors. 

Helm.ai’s approach reduces development time and costs while closing the “sim-to-real” gap, offering a realistic and efficient solution that expands the possibilities for simulation-based training and validation.

"The latest enhancements in VidGen-2 are designed to meet the complex needs of automakers developing autonomous driving technologies," said Vladislav Voroninski, Helm.ai’s CEO and founder. "These advancements enable us to generate highly realistic driving scenarios while ensuring compatibility with a wide variety of automotive sensor stacks. The improvements made in VidGen-2 will also support advancements in our other foundation models, accelerating future developments across autonomous driving and robotics automation."