Helm.ai

Software Development

Menlo Park, California 6,898 followers

Helm.ai is building the next generation of AI technology for automation.

About us

Helm.ai is building the next generation of AI technology for ADAS, autonomous driving, and robotic automation.

Website
http://helm.ai
Industry
Software Development
Company size
51-200 employees
Headquarters
Menlo Park, California
Type
Privately Held
Founded
2016

Locations

Employees at Helm.ai

Updates

  • Helm.ai

    We’re pleased to officially introduce VidGen-1, our state-of-the-art generative AI video model designed for high-end ADAS through L4 autonomous driving development and validation. Trained on thousands of hours of video, VidGen-1 generates a diversity of driving scenarios covering different geographies, landscapes, camera types, and vehicle perspectives.

    VidGen-1 is based on our innovations in generative DNN architectures and Deep Teaching technology, enabling it to generate highly realistic driving video, which is significant for both prediction and simulation tasks. When leveraged for generative simulation, VidGen-1 can effectively bridge the "sim-to-real" gap, and the technology behind it is adaptable to real-time AI-based intent prediction and path planning software across the AV stack. Our video generation approach is also versatile: it can be applied to autonomous driving, robotics, and any other industry that relies on video data.

    Learn more about VidGen-1 here: https://lnkd.in/gwe_shF9

    #autonomousdriving #selfdrivingcars #artificialintelligence #generativeai #adas #computervision #helmai

  • Helm.ai reposted this

    Vladislav Voroninski, CEO at Helm.ai (We're hiring!)

    Which of these videos are real, and which are AI-generated?

    Video data is the most information-rich sensory modality available in autonomous driving, and it comes from the cheapest sensor in the self-driving stack: the camera. To train and validate safety-critical AI systems, it's important to simulate highly realistic data in corner-case scenarios that are infrequently or never observed during data collection.

    Due to the inherent high dimensionality of video data, AI video generation is a challenging autoregressive modeling task. In particular, achieving a high level of image quality while accurately modeling the dynamics of a moving scene is a well-known difficulty in video generation applications. This makes simulating realistic driving video sequences one of the holy grails of generative simulation for autonomous driving. Not only does video modeling come closest to modeling the full reality of the observed world, it also allows for a simple interface for testing any given AV stack.

    We have applied our Deep Teaching technology to train a generative foundation model that produces realistic video sequences of driving scenes. This capability is important for both prediction tasks and generative simulation in the AV stack. All of the videos in the clip below are AI-generated with our model. More details to follow soon!

    #autonomousdriving #artificialintelligence #generativeai #computervision #selfdrivingcars #cvpr2024 #helmai
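    The autoregressive structure described above can be illustrated with a toy sketch. This is not Helm.ai's model (their generative DNNs and Deep Teaching training are proprietary and far more complex); it only shows what "autoregressive video generation" means mechanically: each new frame is conditioned on the frames produced so far, so errors and dynamics compound over the rollout.

    ```python
    import numpy as np

    # Toy sketch only: a stand-in "video model" that rolls out future frames
    # one step at a time. The predictor here is naive linear extrapolation;
    # a real model would be a learned generative DNN.

    def toy_next_frame(history):
        """Predict the next frame from the history (per-pixel linear extrapolation)."""
        if len(history) < 2:
            return history[-1].copy()
        return 2 * history[-1] - history[-2]

    def rollout(observed_frames, n_future):
        """Autoregressively generate n_future frames after the observed ones."""
        frames = [f.astype(float) for f in observed_frames]
        for _ in range(n_future):
            frames.append(toy_next_frame(frames))  # condition on prior frames
        return frames[len(observed_frames):]

    # Example: a 4x4 "scene" whose brightness rises by 1.0 each frame.
    f0 = np.zeros((4, 4))
    f1 = np.ones((4, 4))
    future = rollout([f0, f1], n_future=3)
    print([f.mean() for f in future])  # -> [2.0, 3.0, 4.0]
    ```

    The key point the sketch captures is that video generation is sequential: image quality and scene dynamics must both hold up across every step of the rollout, which is why the post calls it a challenging high-dimensional task.
    
    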

  • Helm.ai

    💡 How will unsupervised learning evolve in autonomous driving? 🚙 How does Helm.ai provide OEMs with unique differentiation advantages?

    Tune into the latest episode of SAE International's Tomorrow Today podcast, where our CEO and co-founder Vladislav Voroninski discusses autonomous driving trends and the pivotal roles that unsupervised learning and Helm.ai play in advancing AV software toward production deployment. A huge thanks to Grayson Brulte for hosting the discussion!

    Listen to the full episode here: https://lnkd.in/dQSifjnk

  • Helm.ai

    Navigating urban environments, in particular handling intersections and traffic lights, is a complex challenge for an autonomous vehicle. Beyond the basic 'go' and 'stop' actions based on traffic light state, many nuanced scenarios require a combination of understanding more complex traffic rules, negotiating the right of way, and carefully judging the intent of other agents; examples include yielding before making a left turn, or slowing down for a lead vehicle that’s about to make a right turn. How does your ADAS or L4 stack ensure safe navigation in these situations? Incorporating hand-crafted rules simply doesn’t scale to the countless possibilities.

    Helm.ai's foundation models for intent and path prediction, trained through Deep Teaching (unsupervised learning), learn the complexities of how vehicles and pedestrians interact with traffic lights, different intersection geometries, and each other, based purely on observing real driving data. Our foundation models predict how each agent, including the autonomous vehicle, might plan its path in a wide variety of relevant urban scenarios.

    In the demo below, you’ll see our foundation models predicting 9 seconds into the future based on 3 seconds of observed real driving data, at 5 frames per second. This advanced prediction capability allows the autonomous vehicle (the “ego vehicle”) to anticipate the maneuvers of surrounding vehicles and plan a safe path compliant with traffic rules, all without relying on hand-crafted features or traditional simulators. As we expand our world model representation and train larger models on more driving data, the resulting intent and path prediction models will continue to improve, with additional scalable and cost-effective generalization capabilities for ADAS and L4. Stay tuned for more updates!

    #helmai #generativeai #selfdrivingcars #artificialintelligence #ai #autonomousdriving #adas #computervision
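    The numbers in the post pin down the prediction interface: at 5 fps, 3 seconds of observation is 15 frames in, and 9 seconds of prediction is 45 frames out, per agent. The sketch below is only an illustration of that interface; where Helm.ai uses learned foundation models, it substitutes a constant-velocity baseline as a placeholder predictor.

    ```python
    import numpy as np

    # Sketch of the prediction interface implied by the post:
    # 3 s observed at 5 fps -> 15 input frames; 9 s predicted -> 45 output frames.
    # The constant-velocity extrapolation is a stand-in, NOT Helm.ai's method.

    FPS = 5
    OBS_FRAMES = 3 * FPS    # 15 observed positions per agent
    PRED_FRAMES = 9 * FPS   # 45 future positions to predict per agent

    def predict_paths(observed):
        """observed: (n_agents, OBS_FRAMES, 2) xy tracks -> (n_agents, PRED_FRAMES, 2)."""
        velocity = observed[:, -1] - observed[:, -2]          # per-agent step vector
        steps = np.arange(1, PRED_FRAMES + 1)[None, :, None]  # (1, PRED_FRAMES, 1)
        return observed[:, -1:, :] + velocity[:, None, :] * steps

    # One agent moving +1 m per frame along x.
    track = np.stack([np.arange(OBS_FRAMES, dtype=float),
                      np.zeros(OBS_FRAMES)], axis=-1)[None]   # (1, 15, 2)
    paths = predict_paths(track)
    print(paths.shape, paths[0, -1])  # -> (1, 45, 2) [59. 0.]
    ```

    A learned model would replace `predict_paths` with one that accounts for traffic lights, intersection geometry, and agent interactions, but the input/output shapes stay the same, which is what makes such predictors easy to slot into an AV stack.
    
    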

  • Helm.ai

    We're excited to announce our generative simulation foundation models for perception 🎉

    Our simulation DNNs can create highly realistic driving scenes based on variations of real-world encounters, or generate them purely synthetically from text- or image-based prompts. Our models can simulate driving scenes with variations in illumination and weather conditions, times of day, geographical locations, and types of landscapes (urban/highway), and they handle various road geometries and markings. Users have the flexibility to modify traffic, pedestrians, and other road obstacles.

    The resulting system generates high-fidelity images and the corresponding segmentation labels, which can be used for training and validation across a wide variety of scenarios and corner cases. What you see in our video is just the beginning of our generative simulation capabilities. Stay tuned for more updates!

    For more information, check out our press release: https://lnkd.in/g_cG9X2x

    #generativeai #selfdrivingcars #artificialintelligence #ai #autonomousdriving #adas #computervision
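    The reason generated simulation data can ship with "corresponding segmentation labels" is that the generator knows what it placed in the scene, so the label map comes for free and is pixel-aligned with the image. The toy renderer below (not Helm.ai's system; all names and the 8x8 scene are illustrative) shows that pairing in miniature.

    ```python
    import numpy as np

    # Toy sketch: compose a trivial "scene" and return both the image and a
    # pixel-aligned segmentation label map. Illustrates why generated data can
    # come pre-labeled: the generator already knows what each pixel depicts.

    LABELS = {"background": 0, "road": 1, "vehicle": 2}

    def render_scene(h=8, w=8):
        image = np.zeros((h, w, 3), dtype=np.uint8)
        labels = np.zeros((h, w), dtype=np.uint8)
        # Road: bottom half, gray.
        image[h // 2:, :] = 90
        labels[h // 2:, :] = LABELS["road"]
        # Vehicle: small red box on the road.
        image[h - 3:h - 1, 2:5] = (200, 30, 30)
        labels[h - 3:h - 1, 2:5] = LABELS["vehicle"]
        return image, labels

    img, lab = render_scene()
    print(sorted(np.unique(lab)))  # -> [0, 1, 2]
    ```

    In a real generative simulator the "renderer" is a learned model and the scenes are photorealistic, but the design benefit is the same: every training image arrives with ground-truth labels, with no manual annotation pass.
    
    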


Funding

Helm.ai: 8 total rounds
Last round: Series C, US$55.0M

See more info on Crunchbase