New vehicles are being fitted with more sensors, including cameras, LiDAR, and radar. Those sensors collect and transmit information that could help other cars and traffic authorities avoid accidents and identify road hazards.
One of the main challenges for traffic authorities is keeping information about road conditions up to date. Without a direct view of streets and highways, it is almost impossible to track constantly changing traffic conditions, hazards, temporary works, and other events as they happen.
Newer connected vehicles, equipped with a battery of sensors that capture their environment, could be lifesavers if they send that sensor data to the traffic management system, continuously updating it on current conditions.
Connected cars have a trove of computer vision sensors mapping their environment
Today, it is uncommon to find a new model not equipped with cameras or other detection equipment. Even the most basic car models have proximity sensors to help drivers park and maneuver in tight spaces.
Advanced models also have several cameras to detect other vehicles and the infrastructure around them. Those cameras, continuously monitoring the road with 360º vision, help Advanced Driver Assistance Systems (ADAS) alert drivers to lane departures and to other vehicles approaching or braking. They also enable self-parking and limited driverless features in some models.
Many vehicles with sophisticated ADAS features also carry other sensors, such as radar and Light Detection and Ranging (LiDAR). Both technologies are now critical to enabling partially and fully autonomous vehicles.
LiDAR was used in the first experimental driverless cars. The technology detects objects, along with their size and exact position. NASA pioneered its use over 45 years ago to measure distances in space. LiDAR has the advantage of producing a digital copy of a physical object (a digital twin) and mapping the environment around the vehicle.
Today, LiDAR sensors are compact and consume far less energy than earlier models. The new sensors can be installed at several points around a vehicle, allowing its computer to “see” the real world around it in 3D. LiDAR’s biggest drawback is that it does not work well in cloudy conditions or in fog: because it uses light beams, it needs a clear line of sight to map its surroundings accurately.
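To make the idea concrete, here is a minimal, illustrative Python sketch of how a single LiDAR return can be turned into a 3D point: the pulse’s round-trip time gives the range, and the beam’s angles give the direction. The function name and the example values are hypothetical, not any vendor’s API.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def lidar_return_to_point(time_of_flight_s: float,
                          azimuth_deg: float,
                          elevation_deg: float) -> tuple[float, float, float]:
    """Convert one LiDAR return into an (x, y, z) point in the sensor frame.

    The pulse travels to the object and back, so the range is half the
    round-trip distance. Azimuth and elevation give the beam direction.
    """
    rng = SPEED_OF_LIGHT * time_of_flight_s / 2.0
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = rng * math.cos(el) * math.cos(az)   # forward
    y = rng * math.cos(el) * math.sin(az)   # left
    z = rng * math.sin(el)                  # up
    return (x, y, z)

# Example: a return arriving after ~200 nanoseconds is roughly 30 m away.
print(lidar_return_to_point(200e-9, azimuth_deg=10.0, elevation_deg=-2.0))
```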
To supplement LiDAR and handle conditions that challenge cameras, some models also use radar. The technology, widely used in aviation and law enforcement, relies on radio waves to measure the distance and composition of objects. Its primary limitation is that it cannot detect small objects: it can pick up larger ones, such as other vehicles, but it cannot produce a precise image. That’s why most vehicles use radar as a backup to LiDAR, not a replacement for it.
Many vehicles now depend on advanced high-definition cameras for most ADAS features. Companies such as Mobileye develop sophisticated cameras and software that recreate the way humans see the environment and react to it.
Mobileye’s philosophy has been that “if a human can drive a car based on vision alone – so can a computer.” They argue that while other sensors may provide redundancy for object detection, the camera is the only real-time sensor for driving path geometry and other static scene semantics, such as traffic signs and on-road markings.
Vision Zero projects are failing because of a lack of data
In 1997, the Swedish Parliament introduced a “Vision Zero” policy that aimed to reduce road fatalities and severe injuries to zero by 2020.
Nowadays, many cities worldwide have Vision Zero programs to reduce the number of accidents and fatalities on their roads. In most dense cities, such as New York or San Francisco, over a third of severe and fatal collisions occur on a handful of streets. To address the situation, municipalities usually target those areas with practical safety improvements like protected bike lanes, wider sidewalks, and reduced traffic speeds.
While this has helped reduce the number of accidents in some areas, the Vision Zero goal of no fatalities remains elusive. In San Francisco, 10 pedestrians died in 2021, 15 in 2019, and 11 in 2016.
To advance the protection of drivers, cyclists, and pedestrians, accurate mapping of the environment is critical. While traffic cameras provide some basic information, operational safety standards require backup sensors (“redundancy”) for every element of the chain, from sensing to actuation and beyond.
One example is the use of Google Maps to detect traffic congestion. Because many smartphone users keep their location feature active, Google’s servers can see the volume and speed of traffic and update their maps to show where traffic is flowing freely and where a tie-up has formed.
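The underlying idea can be sketched in a few lines of Python. This is not Google’s actual pipeline, just an illustrative aggregation: average the anonymized speed samples reported for each road segment and flag segments moving well below their typical free-flow speed. The segment names, speeds, and 50% threshold are all assumptions made for the example.

```python
from collections import defaultdict

# Anonymized speed samples reported by phones: (road_segment_id, speed_kmh).
samples = [
    ("market_st_01", 12.0), ("market_st_01", 9.5), ("market_st_01", 14.0),
    ("embarcadero_03", 41.0), ("embarcadero_03", 38.5),
]

# Illustrative free-flow speeds for each segment.
FREE_FLOW_KMH = {"market_st_01": 40.0, "embarcadero_03": 45.0}

def congestion_by_segment(samples, free_flow):
    """Classify each road segment as 'congested' or 'fluid'."""
    speeds = defaultdict(list)
    for segment, kmh in samples:
        speeds[segment].append(kmh)
    status = {}
    for segment, values in speeds.items():
        avg = sum(values) / len(values)
        # Flag the segment as congested when traffic moves at less than
        # half of its typical free-flow speed (assumed threshold).
        status[segment] = "congested" if avg < 0.5 * free_flow[segment] else "fluid"
    return status

print(congestion_by_segment(samples, FREE_FLOW_KMH))
```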
That’s why some vehicle manufacturers, in collaboration with traffic authorities, are now starting “crowd-mapping” projects that harness the proliferation of camera-based systems to build and maintain an accurate, near-real-time map of the environment. Connected cars with different sensors transmit compressed data about the road geometry and landmarks around them, tagged with precise location information. That data is sent to the cloud for aggregation and analysis, and the system then updates its databases, which in turn feed highly accurate maps for connected vehicles and traffic authorities.
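As a rough sketch of how such a crowd-mapping flow might work, the Python below models a compact observation a car could upload and a toy cloud-side aggregator that only confirms a landmark once several independent cars have reported it. The class names, fields, and three-sighting threshold are hypothetical, chosen purely for illustration.

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class RoadObservation:
    """Compact, illustrative record a connected car might upload."""
    latitude: float
    longitude: float
    landmark_type: str    # e.g. "traffic_sign", "lane_marking", "road_block"
    timestamp_utc: str

@dataclass
class CrowdMap:
    """Toy cloud-side aggregator: counts independent sightings per map cell."""
    sightings: Counter = field(default_factory=Counter)
    confirmation_threshold: int = 3   # assumed: require three cars to agree

    def ingest(self, obs: RoadObservation) -> None:
        # Round coordinates so nearby reports fall into the same map cell.
        cell = (round(obs.latitude, 4), round(obs.longitude, 4), obs.landmark_type)
        self.sightings[cell] += 1

    def confirmed_landmarks(self):
        # Only landmarks reported by enough vehicles make it into the map update.
        return [cell for cell, count in self.sightings.items()
                if count >= self.confirmation_threshold]

crowd_map = CrowdMap()
for _ in range(3):  # three different cars report the same traffic sign
    crowd_map.ingest(RoadObservation(37.7749, -122.4194, "traffic_sign",
                                     "2024-05-01T08:00:00Z"))
print(crowd_map.confirmed_landmarks())
```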
Furthermore, any vehicle in the network can flag potential hazards in real time without the driver’s intervention. If, for example, part of the road is blocked by a fallen tree or a traffic signal stops working, the onboard computer can automatically send an alert to the relevant services.
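A hazard report of this kind could be as simple as a small structured message assembled and sent by the onboard computer. The sketch below shows an illustrative payload; the field names and hazard categories are assumptions, not a real traffic-authority interface.

```python
import json

def build_hazard_alert(latitude: float, longitude: float,
                       hazard_type: str, detected_by: str) -> str:
    """Assemble a hazard report the onboard computer could send
    to a traffic-management service without driver action."""
    alert = {
        "latitude": latitude,
        "longitude": longitude,
        "hazard_type": hazard_type,       # e.g. "fallen_tree", "signal_outage"
        "detected_by": detected_by,       # which sensor flagged the hazard
        "requires_driver_action": False,  # sent automatically
    }
    return json.dumps(alert)

# Example: the front camera detects a fallen tree blocking the road.
print(build_hazard_alert(37.7680, -122.4312, "fallen_tree", "front_camera"))
```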
Deployed at scale, these crowd-mapping and real-time detection systems could help reduce accidents and remove potential hazards anywhere, anytime.
Standards and collaboration are paramount
Obviously, common standards and collaboration among stakeholders are critical to enabling a shared view of the road environment and addressing potential hazards.
It is estimated that a fully autonomous car (SAE Level 4 or 5) could be involved in an accident only once in every 100 dangerous situations; in other words, 99 out of 100 dangerous situations would be resolved safely. If ADAS systems with the right information and connectivity, and a driver who is still active and responsive, maintained that failure rate, they could avoid 99% of car accidents.
Today, millions of vehicles worldwide are equipped with sensors, processing power, and connectivity to enable a safer mobility future. It is up to the vehicle manufacturers, traffic authorities, and other stakeholders to find ways to share information and, ultimately, arrive at the Vision Zero target.