How Helm.ai Uses Generative AI for Self-Driving Cars

Self-driving cars were supposed to be in our garages by now, according to optimistic predictions made only a few years ago. While that hasn't happened, we may be approaching some turning points, with robotaxi adoption growing and consumers getting accustomed to ever more advanced driver-assistance systems in their cars. One of the companies driving things forward is Silicon Valley-based Helm.ai, which develops software for both driver assistance and fully autonomous vehicles.
The company provides foundation models for the intent prediction and path planning that self-driving cars need on the road, and it uses generative AI to create synthetic training data that prepares cars for the many things that can go wrong out there. IEEE Spectrum spoke with Vladislav Voroninski, Helm.ai's founder and CEO, about how the company creates synthetic data to train and validate self-driving car systems.
How does Helm.ai use generative AI to help develop self-driving cars?
Vladislav Voroninski: We use generative AI for simulation purposes. So given a certain amount of real data that you've observed, can you simulate novel situations based on that data? You want to create data that's as realistic as possible while still offering something genuinely new. We can create data for any camera or sensor to increase variety in those data sets and to address the corner cases for training and validation.
I know you have VidGen to create video data and WorldGen to create other types of sensor data. Are different car companies still relying on different modalities?
Voroninski: There's definitely interest in multiple modalities from our customers. Not everyone is trying to do everything with vision only. Cameras are relatively cheap, while lidar systems are more expensive. But we can actually train simulators that take camera data and simulate what the lidar output would look like. That can be a way to save on costs.
And even if it's just video, there will be certain cases that are incredibly rare, or essentially impossible, or too dangerous to encounter while driving in the real world. So we can use generative AI to create video data of those cases that's of extremely high quality and essentially indistinguishable from real data. That's also a way to save on data-collection costs.
How do you create those unusual edge cases? Do you say, "Now put a kangaroo on the road, now put a zebra on the road"?
Voroninski: There are ways to query these models to get them to produce unusual situations; it's really about combining various ways of controlling the simulation models. That can be done with text prompts, image prompts, or different types of geometric input. These scenarios can be explicitly specified: If an automaker already has a wish list of situations they know can happen, they can query these foundation models to produce those situations. But you can also do something more scalable, where there's some process of exploration or randomization of what happens in the simulation, and that can be used to test the self-driving stack against different situations.
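The two querying modes described above, explicit wish-list prompts and randomized exploration, can be sketched as follows. This is a hypothetical illustration: the prompt strings, `WISH_LIST`, and the scenario-sampling functions are placeholders of my own, not a real Helm.ai API.

```python
import random

# Hypothetical wish list of known corner cases, expressed as text prompts
# for a generative simulation model (illustrative strings only).
WISH_LIST = [
    "a kangaroo standing in the middle of a two-lane road",
    "a mattress falling off a truck on the highway",
    "a pedestrian stepping out from between parked cars",
]

WEATHER = ["clear", "heavy rain", "dense fog", "light snow"]
TIME_OF_DAY = ["dawn", "noon", "dusk", "night"]

def explicit_scenarios():
    """Explicit mode: enumerate every known corner case as a prompt."""
    return list(WISH_LIST)

def explore_scenarios(n, seed=0):
    """Exploratory mode: randomize conditions around the wish list to
    cover combinations no one thought to write down."""
    rng = random.Random(seed)
    return [
        f"{rng.choice(WISH_LIST)}, {rng.choice(WEATHER)}, {rng.choice(TIME_OF_DAY)}"
        for _ in range(n)
    ]
```

Seeding the random generator keeps the exploratory sampling reproducible, so a failing scenario can be regenerated exactly when testing the self-driving stack.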
And one nice thing about video data, which is certainly the dominant modality for self-driving, is that you can train on video data that doesn't just come from driving. So when it comes to rare object categories, you can actually find them in lots of different data sets.
So if you have a video data set of animals in a zoo, that could help the driving system identify a kangaroo on the road?
Voroninski: Certainly, that kind of data can be used to train perception systems to understand those different object categories. It can also be used to simulate sensor data that incorporates those objects into a driving scenario. I mean, similarly, very few people have actually seen a kangaroo on the road in real life. Or maybe even in video. But it's easy enough to conjure it in your mind, right? And if you did see it, you'd be able to understand it pretty quickly. What's neat about generative AI is that if [the model] is exposed to different concepts in different scenarios, it can combine those concepts in novel situations. It can observe those things in other situations and then bring that understanding to driving.
How do you quality-control the synthetic data? How do you assure your customers that it's as good as the real thing?
Voroninski: There are metrics you can capture that quantitatively relate real data to synthetic data. One example: Take a real data set and a synthetic data set that's meant to simulate it. You can fit a probability distribution to each, and then you can numerically compare the distance between those distributions.
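One simple instance of comparing two such distributions, sketched here under the assumption that both data sets have already been reduced to normalized histograms over some scene feature, is the Jensen-Shannon divergence. The interview doesn't specify which metric Helm.ai actually uses; this is just an illustration of the idea.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: a symmetric, bounded distance between
    two probability distributions (0 means they are identical)."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Toy histograms of some feature (e.g., objects per frame, bucketed):
real      = [0.50, 0.30, 0.15, 0.05]
synthetic = [0.48, 0.31, 0.16, 0.05]
mismatch  = [0.05, 0.15, 0.30, 0.50]
```

A synthetic set that closely tracks the real distribution yields a divergence near zero, while a poorly matched one scores much higher, giving a single number to track as the simulator improves.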
Second, we can verify that the synthetic data is useful for solving certain problems. You can say, "We're going to address this corner case." Then you can verify that using the simulated data actually solves the problem and improves accuracy on that task, without having trained on real data for it.
Are there naysayers who say that synthetic data will never be good enough to train these systems and teach them everything they need to know?
Voroninski: The naysayers are typically not AI experts. If you look at where the puck is going, it's pretty clear that simulation will have a major impact on the development of autonomous driving systems. Also, "good enough" is a moving target, like the definitions of AI or AGI [artificial general intelligence]. Certain advancements get made, then people get used to them: "Oh, that's not interesting anymore." But I think it's quite clear that AI-based simulation will continue to improve. If you explicitly want the AI system to model something, there's no bottleneck there at this point. Then it's just a question of how well it generalizes.
2025-03-18 12:00:00