Understanding the math behind self-driving cars

By Julien Mottin

The SAE categorises the capabilities of a fully autonomous car as Level 5, the highest category. Leti researchers have devised a novel solution to address the challenge of enabling a fully self-driving car.

The concept of the autonomous car continues to make headlines across industry reports. While many questions about the safety and regulation of self-driving cars remain unanswered, the enabling technologies are already transforming the automotive landscape.

Late-model cars are really computers on four wheels: 95% of the innovation is now in their electronic components. Technology blocks for radar, ultrasonic sensors, cameras, lasers, hundreds of electronic control units (ECUs), control software and connectivity are embedded in every new car.

However, the road to completely autonomous driving is still quite long, notwithstanding the legal and safety issues.

A fully autonomous car would be responsible for steering, acceleration and deceleration, as well as comprehensive monitoring and understanding of the driving environment in all conditions and situations. The Society of Automotive Engineers (SAE), which sets standards and roadmaps for the automotive industry, categorises those capabilities as Level 5, the highest category.

Leti researchers have developed a novel solution to the challenge of enabling a fully autonomous car.

A trunk full of computers

Autonomous cars offering Level 3 functions, which include environment monitoring, are already being offered on high-end vehicles for 2017. Almost all car manufacturers have showcased Level 4 prototypes, in which the car is fully autonomous.

These prototypes demonstrate that the technology required for such advanced cars is available, with one significant drawback: their computing requirements fill the trunk with computers.

Level 3 was a milestone, because the car is responsible for monitoring and understanding the driving environment, and the human driver is relegated to a backup role. For the car, that means that the perception task cannot be restricted to tracking a limited set of objects (aka feature fusion or object fusion).

The vehicle must be able to evaluate the state of any location in the surroundings to build a comprehensive map of the vehicle environment (map fusion or grid fusion).
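To make the distinction concrete, here is a minimal sketch (in C, with hypothetical type and field names, not drawn from any production system) of the two data models: object fusion maintains a short list of tracked obstacles, while grid fusion maintains a state for every cell of the surroundings.

```c
/* Illustrative sketch only; type and field names are hypothetical. */

/* Object (feature) fusion: a bounded list of tracked obstacles. */
typedef struct {
    float x, y;        /* position in the vehicle frame, metres */
    float vx, vy;      /* velocity estimate, m/s                */
    float confidence;  /* track confidence, 0..1                */
} TrackedObject;

TrackedObject tracks[64];  /* at most a few dozen objects        */

/* Grid (map) fusion: one occupancy state per 10 cm x 10 cm cell. */
#define GRID_SIDE 1000     /* 100 m side / 0.1 m resolution      */
float occupancy[GRID_SIDE][GRID_SIDE];  /* ~10^6 cells to refresh */
```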

This map is made up of cells, typically 10 cm x 10 cm, modelling an environment of at least 100 m x 100 m (200 m in front of the vehicle is often required). This yields an environment model containing more than a million cells, whose states have to be refreshed each time the sensors deliver new data, typically every 20 ms to 50 ms.
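Those figures set the scale of the problem. At 10 cm resolution, a 100 m x 100 m map alone already contains

$$\frac{100\,\mathrm{m}}{0.1\,\mathrm{m}} \times \frac{100\,\mathrm{m}}{0.1\,\mathrm{m}} = 1000 \times 1000 = 10^{6}\ \text{cells},$$

and refreshing every cell within a 20 ms sensor period means on the order of $5 \times 10^{7}$ cell updates per second.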

Moreover, to enhance robustness and safety, multiple heterogeneous range sensors are used (cameras, radar, lidar, ultrasonic sensors). These provide the perception system with noisy, possibly contradictory data, which must be fused in a way that copes with sensor uncertainty.

In this map, we want to detect every possible obstacle in the surroundings, but also every bit of free space, to provide the vehicle with safe zones for emergency avoidance or simple navigation. This requires an enormous amount of computation.
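To make the per-cell workload concrete, the classical way to fuse such measurements is the Bayesian occupancy-grid update, usually implemented in log-odds form so that each fusion step reduces to an addition. The sketch below is a minimal illustration (hypothetical function names, not Leti’s code) of the operation that has to run tens of millions of times per second:

```c
#include <math.h>

/* Log-odds representation of an occupancy probability:
 * l = log(p / (1 - p)), so Bayesian fusion becomes addition. */
static float logit(float p) { return logf(p / (1.0f - p)); }

#define L_PRIOR 0.0f  /* prior P(occupied) = 0.5 -> log-odds 0 */

/* Fuse one measurement into one cell. p_meas is the inverse sensor
 * model: the probability that the cell is occupied given this
 * measurement alone, derived from the sensor's noise model. */
void fuse_cell(float *cell_logodds, float p_meas)
{
    *cell_logodds += logit(p_meas) - L_PRIOR;
}

/* Convert back to a probability when the planner needs one. */
float cell_probability(float cell_logodds)
{
    return 1.0f / (1.0f + expf(-cell_logodds));
}
```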

Roadblocks at Level 5

Level 4 cars can only be operated in fully autonomous mode when driving conditions allow it (e.g. on the highway, with good visibility). At the ultimate level, the car operates autonomously in all conditions. No one knows whether that level will ever be reached, but technology will not be the barrier. The problem is more complex than that: full autonomy would reshape an entire industry (worth several trillion dollars) and redefine its entire value chain and business models.

Moreover, the societal impact of disrupting accepted transportation customs could be broad and deep, at least initially. For example, parents could send children to school alone in a car programmed to go there and back. Will school-bus drivers lose their jobs? Will driverless cars compete with taxis? Will there be more car-pooling, or less?

But the good news is that the first four levels do not present such wide-ranging societal issues and barriers. The challenges are merely technological: for example, how to meet automotive electronics requirements.

One avenue is consumer-electronics solutions. Nvidia CEO Jen-Hsun Huang recently announced a powerful platform called Drive PX 2, capable of eight trillion operations per second; such computing power might be sufficient for Level 3 applications.

But how would such actively cooled platforms behave after 10 to 15 years of intense usage? How would they age? How would they handle the car’s vibrations? How would they meet certification and safety requirements? Unfortunately, the answers to such questions are unknown, which explains why the automotive industry is still looking for solutions.

A disruptive concept

A second solution would be to entirely rethink the foundations of autonomous-driving applications: we must not try to achieve Level 3 by incrementally building on top of Level 2 solutions.

We need to conceive, from scratch, new computing paradigms and system architectures that are strongly connected with all the vehicle’s sensors, rather than trying to shrink robotic demonstrators to fit automotive requirements. Many good opportunities can be found by taking a closer look at the sensors: sensors with a large field of view lack the angular resolution to precisely locate obstacles, but they are really good at detecting free space.

Our research group at Leti has started down this road with a new concept in the form of an arithmetical formulation of probabilistic fusion that has proven to be mathematically equivalent to the original robotic framework introduced by A. Elfes in the late 1980s.
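For reference, Elfes’ framework refreshes each cell $c$ with every measurement $z_t$ via Bayes’ rule, most compactly written in its standard log-odds form (with $\ell_0$ the prior log-odds of occupancy):

$$\ell_t(c) = \ell_{t-1}(c) + \log\frac{P(c\ \text{occupied} \mid z_t)}{1 - P(c\ \text{occupied} \mid z_t)} - \ell_0$$

Leti’s contribution is an arithmetic reformulation that is provably equivalent to this update while being far cheaper to compute.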

This formulation removes the need for floating-point operations and reduces the computing requirements by a factor of 100. With such an approach, it is feasible to execute the perception task on existing, already certified automotive platforms.
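The article does not detail the exact formulation, but one common way to see how floating point can be eliminated from a log-odds update is to quantize the log-odds values onto an integer scale, with per-sensor increments precomputed offline from each sensor’s error model. The following is a hypothetical sketch of that general idea, not Leti’s actual implementation:

```c
#include <stdint.h>

/* Quantized log-odds: each unit is a fixed log-odds step, so the
 * float cell state is replaced by a saturating 16-bit counter. */
#define L_MAX  INT16_MAX
#define L_MIN  INT16_MIN

typedef int16_t cell_t;

/* Per-sensor increments are precomputed offline, so no logarithm
 * (and no floating-point operation) is evaluated at run time. */
static void fuse_cell_int(cell_t *cell, int16_t increment)
{
    int32_t v = (int32_t)*cell + increment;  /* integer add only */
    if (v > L_MAX) v = L_MAX;                /* saturate         */
    if (v < L_MIN) v = L_MIN;
    *cell = (cell_t)v;
}
```

Because every run-time operation is an exact integer addition with saturation, the arithmetic is deterministic and free of rounding error, which is one plausible reading of how such a formulation eases certification.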

Leti’s Σfusion is an embedded data-fusion demonstrator of this concept that enables environment perception in autonomous vehicles, modelling the surroundings from range-sensor data. Its benefits are:

- an innovative formal approach that guarantees the traceability of sensor accuracy;
- accurate and fast environment perception;
- low power consumption and easy integration;
- a simplified memory representation;
- certifiability thanks to error-free operations;
- real-time performance achieved on a mass-market MCU; and
- support for the development of new critical functions.

__Figure 1:__ *SAE levels of driving automation (Source: SAE International)*
