Despite non-technical media reports heralding the demise of the autonomous automobile in light of a recent death in a Tesla Model S, I want to present in this article the sensor electronics that, combined with better and more refined software algorithms, will ultimately enable a safe, fully autonomous vehicle within the next ten years.

Tesla Motors has clearly and repeatedly expressed that the Model S is not a self-driving car. Tesla drivers are required to explicitly acknowledge "that the system is new technology and still in a public beta phase."

The recent BMW-Mobileye-Intel announcement stresses the “safety comes first” motto that must guide any deployment of this technology. EETimes chief international correspondent Junko Yoshida put these events in perspective in her timely blog.

Even NPR’s Sonari Glinton, in a recent radio discussion, commented that the Tesla Model S has self-driving features: autonomous elements meant to assist drivers rather than replace them.

Google’s fully autonomous test vehicle relies on a more costly, complex remote-sensing system using LIDAR.

Tesla Motors’ Elon Musk has commented:

"For full autonomy you'd obviously need 360-degree cameras, you'd need probably redundant forward cameras, you'd need redundant computing hardware, and then redundant motors and a steering rack. ... That said, I don't think you need Lidar. I think you can do this all with passive optical and then with maybe one forward radar." Musk is endeavouring to make a relatively affordable, but safe, selection of vehicles for the consumer market.

Tesla Motors' Autopilot relies on a combination of "cameras, radar, ultrasonic sensors and data to automatically steer down the highway, change lanes, and adjust speed in response to traffic." It uses auto-braking technology supplied by the Israeli company Mobileye, one of the three partners in the BMW-Mobileye-Intel announcement noted above.

Mobileye's emergency auto braking in the Tesla was "designed specifically" for rear-end collision avoidance, not for a lateral collision like the one in the fatal July 1 accident. The technology is being refined through continuous, ongoing development, such as Reference 3, which outlines a dynamic control design for automated driving of vision-based autonomous vehicles, with a special focus on coordinated steering and braking control in emergency obstacle avoidance. Reference 4 also presents an innovative approach that could help avoid such an accidental loss of life.

Google has self-driving cars in several states; Tesla has announced that its cars will be fully autonomous in three years; and Uber has opened a test facility in Pittsburgh that will develop an autonomous taxi fleet (Reference 5).

Toyota has described its self-driving highway car with 12 sensors for capturing data: one camera module behind the front mirror, five radar devices that use radio waves to capture the speed of other cars, and six lasers that detect the position of objects around the car (Reference 5).

Finally, the regulatory framework for testing and operation of autonomous vehicles on public roads has already been established in California (Reference 5).

But we are not yet ready for fully autonomous vehicles. We will be in the relatively near future; until then, drivers must keep their hands on the wheel and their eyes on the road.

Sensors are the eyes and ears of autonomous driving

Ears? How about hearing the siren of an approaching police car, ambulance or fire engine and determining the direction it is coming from before it can be seen by the vehicle’s cameras or LIDAR, or even in conjunction with them? Or the sound and direction of an approaching motorcycle before it is seen by a human driver, LIDAR or camera?
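As a rough illustration of the “ears” idea, the bearing of a siren can be estimated from the time difference of arrival between two microphones. The sketch below is a minimal, hypothetical example; the 0.2 m microphone spacing and the far-field assumption are mine, not a description of any production system.

    import math

    SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C

    def bearing_from_tdoa(delta_t_s, mic_spacing_m):
        """Estimate a sound source's bearing (degrees from broadside) from the
        time-difference-of-arrival between two microphones, assuming the
        source is far away compared with the microphone spacing."""
        # Path-length difference between the microphones is c * delta_t;
        # for a far-field source, sin(bearing) = path difference / spacing.
        ratio = max(-1.0, min(1.0, SPEED_OF_SOUND * delta_t_s / mic_spacing_m))
        return math.degrees(math.asin(ratio))

    # A siren arriving 0.3 ms earlier at one microphone of a 0.2 m pair
    # implies a source roughly 31 degrees off broadside on that side.
    print(bearing_from_tdoa(0.3e-3, 0.2))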

Let’s look at the sensor technology of autonomous vehicles not through a novice’s eyes, but with an engineering designer’s view of the electronic sensors that will enable this technology and that couple to the vehicle’s brain: advanced processing and software algorithms.

The Eyes

LIDAR

LIDAR is a system that uses rotating laser beams. This technology is being used in the experimental autonomous vehicles under development at BMW, Google, Nissan and Apple. The laser-based system will have to come down enough in price to be used in mass-market cars; in the next few years it will.

LIDAR basics

Figure 1: A single emitter/detector pair, rotating-mirror LIDAR design. This single laser emitter/detector pair, combined with a moving mirror, enables scanning across at least one plane. The mirror not only reflects the emitted light from the diode, but also reflects the return light to the detector. By implementing a rotating mirror in this application, we can typically achieve 90–180 degrees of azimuth view while simplifying both the system design and the manufacturability, since the mirror is the only moving mechanism. (Source: Velodyne)

Figure 2: Shown here is an application of a conventional single emitter/detector pair laser. (Source: Velodyne)

High Definition LIDAR (HDL)

Velodyne

Velodyne has a High Definition LIDAR (HDL) sensor that is able to meet the stringent demands of autonomous vehicle navigation. The HDL unit provides a 360-degree azimuth field of view and a 26.5° elevation field of view, up to a 15 Hz frame refresh rate, and a rich point cloud populated at a rate of one million points per second.
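As a quick back-of-envelope check on those quoted figures, the point budget per frame can be derived directly. The arithmetic below is my own illustration; it uses only the numbers already cited above.

    # Quoted HDL figures: one million points/s, up to 15 Hz refresh, 360 degree azimuth.
    points_per_second = 1_000_000
    max_frame_rate_hz = 15
    azimuth_fov_deg = 360.0

    points_per_frame = points_per_second / max_frame_rate_hz      # points per full revolution
    points_per_azimuth_degree = points_per_frame / azimuth_fov_deg
    print(round(points_per_frame), round(points_per_azimuth_degree, 1))
    # -> about 66667 points per frame, roughly 185 returns per degree of azimuth
    #    (summed across all elevation channels)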

Light pulses have previously been used to measure distance. This elementary technology emits a light pulse from a laser diode. The light travels until it reaches a target, at which time a portion of the light energy is reflected back towards the emitter. A photon detector is mounted near the emitter and detects this reflected signal. The time difference between the emitted and detected pulse determines the distance of the target. When this pulsing distance measurement system is actuated, a large number of sampled points (designated as a “point cloud”) can be collected.
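The distance calculation itself is simply the round-trip time scaled by the speed of light; a minimal sketch of that arithmetic:

    SPEED_OF_LIGHT = 299_792_458.0   # m/s

    def range_from_round_trip(t_round_trip_s):
        """Target distance from the measured round-trip time of a laser pulse."""
        # The pulse travels out and back, so the one-way distance is half the
        # total path length covered at the speed of light.
        return SPEED_OF_LIGHT * t_round_trip_s / 2.0

    # A return detected 200 ns after emission corresponds to a target about 30 m away.
    print(range_from_round_trip(200e-9))   # ~29.98 m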

If no target is present, the light never returns a reflected signal. If the light is pointed towards the ground, the road surface provides a signal return. If a target is positioned within the point cloud, a notch appears in the data; the distance and width of the target can be determined from this notch. With this collection of points in the point cloud, a 3D picture of the surrounding environment begins to emerge, and the denser the point cloud, the richer the picture becomes.
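To illustrate how such a notch might be picked out of a single azimuth sweep, here is a simplified sketch. The median-based background estimate and the 2 m drop threshold are illustrative assumptions, not a description of any particular vendor's processing.

    from statistics import median

    def find_notch(ranges, drop_threshold_m=2.0):
        """Find the first run of consecutive returns that sit noticeably closer
        than the background (e.g. the road surface) in one azimuth sweep.
        Returns (start_index, end_index, target_distance_m) or None."""
        background = median(ranges)                 # crude background estimate
        in_notch, start = False, 0
        for i, r in enumerate(ranges):
            if not in_notch and r < background - drop_threshold_m:
                in_notch, start = True, i           # notch begins here
            elif in_notch and r >= background - drop_threshold_m:
                return start, i - 1, min(ranges[start:i])
        return None

    # Example sweep: road returns near 25 m with a car-sized notch at ~12 m.
    # The notch's index span, times the angular step, gives the target's width.
    sweep = [25.0] * 10 + [12.1, 12.0, 12.2, 12.1] + [25.0] * 10
    print(find_notch(sweep))                        # -> (10, 13, 12.0)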

Methods for 3D LIDAR

One method of achieving 3D LIDAR is to move the emitter/detector pair up and down while the mirror rotates (sometimes referred to as “winking” or “nodding”). This generates elevation data points, but it reduces the number of azimuth data points; the point clouds are therefore less dense, resulting in a lower-resolution system.
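Whichever scanning method is used, each return ultimately reduces to a range plus the two scan angles (the mirror's azimuth and the nodding elevation), which can be converted into a 3D point in the sensor's frame. A minimal sketch follows; the axis conventions here are my own assumption, not a vendor specification.

    import math

    def polar_to_cartesian(range_m, azimuth_deg, elevation_deg):
        """Convert one LIDAR return (range, azimuth, elevation) to x, y, z in
        the sensor frame: x forward, y to the left, z up (assumed axes)."""
        az = math.radians(azimuth_deg)
        el = math.radians(elevation_deg)
        ground = range_m * math.cos(el)      # projection onto the horizontal plane
        return (ground * math.cos(az),       # x: forward
                ground * math.sin(az),       # y: left/right
                range_m * math.sin(el))      # z: up/down

    # A 20 m return at 30 degrees azimuth and -2 degrees elevation sits just
    # below the sensor's horizontal plane.
    print(polar_to_cartesian(20.0, 30.0, -2.0))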

Another method is “flash LIDAR.” These systems operate by simultaneously illuminating a large area, and capturing the resultant pixel-distance information on a specialised 2D focal plane array (FPA). These types of sensors are complicated and difficult to manufacture, and consequently not widely deployed commercially. However, it is expected that they may someday replace mechanically actuated sensors, as they are solid state and require no moving parts.

There are many 3D point cloud systems in several configurations, but for the safety needs that autonomous vehicle navigation demands, current systems are not there yet. For example, there are numerous systems that take excellent pictures but take several minutes to collect a single image; such systems are unsuitable for any mobile sensing application. There are also flash systems that have an excellent update rate but lack field of view and good distance performance. And there are single-beam systems that can provide useful information but do not work well with objects that are either too small or fall outside the unit’s field of view.

In order to adapt to the greatest number of uses for a LIDAR sensor, it is necessary to see everywhere around the collection point: a full 360 degrees. In addition, the processed data needs to be presented to the user in real time, so there must be minimal delay between gathering the data and rendering the image. In autonomous navigation, for example, it is generally accepted that human response time is several tenths of a second; it is therefore realistic to provide the navigation computer with a complete, fresh update at least ten times a second. The vertical field of view needs to extend above the horizon, in case the car enters a dip in the road, and should extend down as close as possible to see the ground in front of the vehicle, to accommodate dips in the road surface or steep declines.
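To put that ten-updates-per-second figure in perspective, consider how far the vehicle travels between two successive full refreshes. A small illustrative calculation follows; the 120 km/h speed is simply an assumed example.

    def metres_travelled_per_frame(speed_kmh, frame_rate_hz):
        """Distance the vehicle covers between two successive full LIDAR updates."""
        speed_ms = speed_kmh / 3.6          # convert km/h to m/s
        return speed_ms / frame_rate_hz

    # At an assumed 120 km/h, a 10 Hz refresh redraws the scene every ~3.3 m of
    # travel, whereas a 1 Hz refresh would leave a ~33 m gap between updates.
    print(metres_travelled_per_frame(120, 10))   # ~3.33 m
    print(metres_travelled_per_frame(120, 1))    # ~33.3 m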

Miniaturising LIDAR in 2016

Velodyne has raised the bar with its VLP-32A, designated as ULTRA Puck. The company claims that this is the first affordable LiDAR sensor capable of addressing vehicle automation levels 1-5 as defined by SAE. Ford is the first automaker to order the ULTRA Puck for use in its recently announced expanded autonomous vehicle development fleet, which will hit the roads later this year.

Figure 3: Velodyne’s ULTRA Puck enables a range of up to 200 metres and supports 32 LiDAR channels, with enhanced resolution to more easily identify objects. Its 32 channels are deployed over a vertical field of view of 28° and configured in a unique pattern to provide improved resolution near the horizon. (Source: Velodyne)

Leddar

Leddar’s LIDAR solutions have also really impressed me.

Figure 4: Leddar offers stand-alone advanced driver assistance systems (ADAS) as well as affordable LiDAR used in passive and active safety systems. With sensor fusion, Leddar can deliver raw data measurements for advanced sensor-fusion integrations. Its sensor redundancy overcomes the limitations of other sensors, prevents system failures and detection errors, and improves detection in various environmental conditions. Its 360-degree point-cloud capability for autonomous driving meets the requirements for higher resolution and performance, with a technology roadmap to support 360°, high-resolution detection. (Source: Leddar)

Affordable LiDAR for all levels of autonomous driving, including the capability to map the environment 360 degrees around the vehicle, will be critical to the safety of drivers, passengers, pedestrians and people in other vehicles.

The Leddar chipsets have a raw data output that makes them suited for advanced sensor fusion solutions that combine data from various types of sensors to provide a holistic perceptual mapping of the vehicle’s surroundings:

  • Integration into ADAS and autonomous functions, where solid-state LiDAR replaces or complements camera and/or radar
  • High-density 3D point-cloud LiDAR for higher levels of autonomous driving
  • Support for both flash and beam-steering LiDARs (e.g. MEMS mirrors)
  • Ranges reaching 250 m, a field of view up to 140°, and up to 480,000 points per second, with resolution down to 0.25° both horizontally and vertically (a rough sanity check of these figures follows this list)
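As a rough sanity check on how those figures relate, the sketch below estimates the frame rate implied by the quoted point rate and angular resolution. Note that no vertical field of view is quoted above, so the 8° used here is purely a hypothetical assumption for illustration.

    # Quoted figures: 140 degree horizontal FOV, 0.25 degree resolution,
    # 480,000 points per second.  The vertical FOV is NOT quoted; 8 degrees
    # is an assumed value used only to make the arithmetic concrete.
    H_FOV_DEG = 140.0
    V_FOV_DEG = 8.0            # hypothetical assumption
    RESOLUTION_DEG = 0.25
    POINTS_PER_SECOND = 480_000

    points_per_frame = (H_FOV_DEG / RESOLUTION_DEG) * (V_FOV_DEG / RESOLUTION_DEG)
    implied_frame_rate_hz = POINTS_PER_SECOND / points_per_frame
    print(int(points_per_frame), round(implied_frame_rate_hz, 1))
    # -> 17920 points per frame, ~26.8 Hz under these assumptions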

Innoluce

Finally, Innoluce offers a 250 m target range at 0.1° resolution to enable what the company claims will be a $100 automotive LIDAR.

Next: Vision image sensors