End-to-end learning for self-driving car

Massive improvements in artificial intelligence (AI) and sensor technology are bringing the next generation of self-driving cars and autonomous vehicles into reality. Throughout this evolution, many methods have been developed to train AI algorithms to drive.

Autonomous vehicles using deep learning technologies, such as delivery robots and warehouse AVs, have already mastered navigation in their dedicated environments. But self-driving cars that you see on the road have not fully mastered all driving conditions.

In today’s self-driving car market, there is an unofficial “race to full autonomy.” Companies around the globe are competing to be the first to offer a publicly purchasable car capable of full self-driving. As such, these companies use different sensor suites, AI algorithms, and training methods in their vehicles to achieve full autonomy. These training methodologies are regarded as the most foundational intellectual property of each company.

This article looks at some of the most popular aspects of autonomous vehicle deep learning methods and explains how end-to-end learning for self-driving cars is used to train modern self-driving cars.

Individual Systems Training Via End-to-End Learning

Self-driving cars may be realizable using traditional, non-AI computing methodologies. However, this approach would hypothetically require the software to explicitly handle any and all possible conditions within the system.

For example, when a sensor reads a specific value, the software must know how to explicitly handle that condition. However, given the practically infinite combinations of sensor values, environmental conditions, and unknown complexities, this sub-system approach has proven to be a rather feeble one for self-driving cars.

This is evident in automotive manufacturers' attempts to automate simple systems. For example, when using cruise control on a road trip, many cars will focus only on maintaining speed up a steep hill rather than managing acceleration efficiently and safely as a human would. There certainly are effective AI-based cruise-control models, but they often fail when combined with other subsystem models within the vehicle, such as steering and braking. Therefore, self-driving car researchers and manufacturers have almost unanimously adopted an alternative training methodology referred to as end-to-end learning, which focuses on training the entire car rather than a single sub-system at a time.
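The cruise-control limitation described above can be sketched as a simple proportional controller that targets only a set speed, with no awareness of grade, efficiency, or the vehicle's other subsystems. The gain and values here are illustrative assumptions, not any manufacturer's actual control law.

```python
# Minimal sketch of naive, single-subsystem cruise control: throttle
# responds only to speed error, clamped to [0, 1]. All constants are
# hypothetical.
KP = 0.5  # proportional gain (illustrative)

def cruise_throttle(set_speed, current_speed, kp=KP):
    """Return a throttle command based purely on speed error."""
    error = set_speed - current_speed
    return max(0.0, min(1.0, kp * error))

# On a steep hill the car slows, and this controller simply floors the
# throttle to hold speed, regardless of efficiency or surrounding
# subsystems -- the behavior the paragraph above describes.
print(cruise_throttle(30.0, 30.0))  # at speed: no throttle
print(cruise_throttle(30.0, 25.0))  # well below speed: full throttle
```

A human driver, by contrast, blends speed, safety, and efficiency goals; coordinating those goals across steering, braking, and throttle is exactly what per-subsystem controllers like this one fail to do.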

What is End-to-End Deep Learning for Self-Driving Cars?

In many ways, modern self-driving cars are a culmination of cutting-edge technologies. These include onboard sensors such as LIDAR systems, advanced cameras, radar, and AI-purposed computers. Nearly all self-driving cars are regarded as ‘ego-centric,’ given that they rely on these onboard sensors to localize and control themselves. A counterexample to an ego-centric self-driving car would be a car that navigates through a city based on information gathered from sensors placed throughout the city rather than mounted on the car.

As such, ego-centric self-driving cars natively collect sensor data on their surroundings, understand that data, and use those understandings to pilot the vehicle. This end-to-end processing of data naturally influences how an autonomous vehicle is trained. End-to-end learning methodologies employ the same ego-centric model to build the algorithms and neural networks that allow vehicles to utilize their data in a “pixel to pavement” fashion. End-to-end training does not independently train subsystems of a self-driving car but rather trains the entire system as a collective whole.
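The “pixel to pavement” idea can be sketched as a single function that maps raw camera pixels directly to control outputs, with no hand-coded intermediate subsystems. The weights below are random placeholders standing in for a trained network; the shapes and names are illustrative assumptions, not any company's actual model.

```python
import numpy as np

# Hypothetical end-to-end policy: one function from a camera frame
# straight to [steering, throttle]. A real system would use a deep
# network; a single linear layer with tanh keeps the sketch minimal.
rng = np.random.default_rng(0)

def end_to_end_policy(image, weights):
    """Map raw pixels directly to control commands in [-1, 1]."""
    features = image.flatten() / 255.0        # "pixel..."
    controls = np.tanh(features @ weights)    # "...to pavement"
    return {"steering": float(controls[0]),
            "throttle": float(controls[1])}

frame = rng.integers(0, 256, size=(66, 200, 3))      # one RGB frame
W = rng.normal(0.0, 0.01, size=(66 * 200 * 3, 2))    # placeholder weights
out = end_to_end_policy(frame, W)
```

Training adjusts every weight jointly from sensor input to control output, which is what distinguishes this from wiring together independently trained perception, planning, and control modules.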

For example, Tesla’s Data Engine autonomous vehicle training infrastructure collects data from a variety of sensors onboard its vehicles and processes that data to understand the conditions, obstacles, and key components of the environment. Based on this understanding, Tesla’s self-driving neural network model controls all subsystems within the vehicle in real time. This end-to-end model allows all facets of the self-driving car to work in unison, without requiring input from a human.

Sub-methodologies of End-to-End Deep Learning

There are many strategies within end-to-end deep learning that can be used for autonomous vehicle training. Here are a few commonly seen methodologies used by a variety of companies. In many cases, more than one of these sub-methodologies can be used for redundant training and iterative improvements.

Vehicle Behavior Cloning

Behavior cloning training methods employ conditional imitation of how other cars navigate scenarios. For example, an AI model can be trained to mimic how other cars on the road navigate construction zones, rainy driving conditions, or even highways. In behavior cloning methodologies, vehicles use the behavior of the vehicles around them to inform their own navigation, even if those vehicles are not self-driving. However, this methodology can be unreliable if a car is required to navigate without other cars around it to lead by example.
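At its core, behavior cloning is supervised learning on recorded (state, action) pairs from demonstrator vehicles. The sketch below regresses demonstrators' steering onto observed states with ordinary least squares; the synthetic data and feature names are assumptions made purely for illustration.

```python
import numpy as np

# Behavior-cloning sketch: fit a policy to imitate recorded actions.
# A real pipeline would use logged sensor states and the surrounding
# vehicles' steering/speed; here both are synthetic.
rng = np.random.default_rng(1)
states = rng.normal(size=(500, 4))          # e.g. lane offset, speed, ...
true_w = np.array([0.8, -0.2, 0.1, 0.5])    # hidden demonstrator behavior
actions = states @ true_w + rng.normal(0.0, 0.01, 500)

# "Clone" the demonstrators by ordinary least squares.
w_cloned, *_ = np.linalg.lstsq(states, actions, rcond=None)

def cloned_policy(state):
    """Predict the steering a demonstrator would have produced."""
    return state @ w_cloned
```

The weakness noted above falls out of this setup: the cloned policy is only defined where demonstrations exist, so states with no surrounding cars to imitate are out of distribution.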

Mimicking Human Driving

Rather than cloning the behavior of other cars in similar scenarios, mimicking human driving allows for a much more ego-centric approach to training cars to operate autonomously. For example, Tesla’s self-driving models run constantly in the background. When the human driver does something contrary to what the model would do, Tesla collects that human driving data to train future models.
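The collection loop described above can be sketched as disagreement-triggered logging: the model runs in shadow mode, and frames where the human's action diverges from the model's prediction beyond a threshold are saved for retraining. The function, threshold, and frame records here are all hypothetical.

```python
# Sketch of disagreement-triggered data collection. Each record is
# (state, human_action, model_action); actions are steering values in
# [-1, 1]. The threshold is an illustrative assumption.
THRESHOLD = 0.2  # max acceptable human/model steering disagreement

def collect_disagreements(records, threshold=THRESHOLD):
    """Return (state, human_action) pairs worth adding to the dataset."""
    flagged = []
    for state, human_action, model_action in records:
        if abs(human_action - model_action) > threshold:
            flagged.append((state, human_action))
    return flagged

log = [
    ("frame_a", 0.05, 0.04),    # agreement: ignored
    ("frame_b", -0.40, 0.10),   # human corrected hard: logged
    ("frame_c", 0.00, 0.35),    # model would have swerved: logged
]
flagged = collect_disagreements(log)
```

Only the disagreements are kept, so the fleet effectively mines the rare situations where the current model is still wrong.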

Bird’s-eye View Models

Bird’s-eye view (BEV) models are created by fusing data from a variety of sensors into a top-down 2D or 3D representation. This helps vehicles understand common driving policies across a wide variety of driving scenarios. For example, vehicles entering a roundabout may only be able to determine the best means of autonomous navigation if trained using a BEV model. While map-based navigation does not apply to every navigation challenge, vehicles trained using BEV models are able to better understand standard driving policies and apply them to nuanced driving situations like five-way intersections, roundabouts, and construction zones.

Additionally, BEV models can be used for AV training in non-end-to-end learning models as well. Manufacturers can train cars to behave in certain mapped conditions, even when direct sensor data is unavailable. For example, vast datasets such as Google Maps can be useful for training vehicles to navigate complex static intersections such as the High Five Interchange in Dallas.

Proprietary Methods

End-to-end learning using vast, well-labeled datasets will likely yield the first fully autonomous car. Of course, there are other machine-learning sub-methodologies for self-driving cars that are proprietary and unknown to the public. What is certain is that the progress of self-driving cars in the last two decades has been profound. The known self-driving training methodologies will contribute to a fully autonomous world in the near future.

