Self-driving cars: observations and interpretations

Article By : Brian Dipert

It's been a rough couple of months for the embryonic autonomous vehicle industry. Here are one engineer's thoughts on the recent setbacks.

It’s been a rough couple of months for the embryonic autonomous vehicle industry. In mid-March, an Uber test car struck and killed a woman pushing a bicycle across the street. Only a few days later, a Tesla Model X in Autopilot mode crashed into the safety barrier section of a highway divider on Highway 101 in Mountain View, CA, killing its driver. And in early May, a Waymo self-driving vehicle was involved in an accident in Chandler, AZ, although in this particular case it’s not clear from initial reports that autonomous technology was at fault: a sedan apparently swerved into the Waymo van while avoiding another collision.

[Image: This Tesla Model X in Autopilot mode crashed on Highway 101 in Mountain View, CA. (Dean C. Smith/Twitter)]

Thoughts on these seeming setbacks? Why yes, I have a few; thank you for asking.

Self-driving tech is currently overhyped
I’ve said it before, I’m saying it now, and I strongly suspect I’ll keep saying it in the future: while Tesla’s Autopilot may be a clever marketing moniker, it’s also a gross over-representation of functional reality. And no amount of “always pay attention and be ready to take over control of the vehicle” legalese wrapped around it can counterbalance the corporate irresponsibility behind its naming. It’s fundamentally why idiots like this guy, who decided it would be a good idea to put his Tesla in Autopilot and then slide over to the passenger seat while rocketing down the M1 motorway, feel empowered to indulge their Darwin Awards urges. And sadly, he was probably spot-on when, after being reported by a fellow motorist and subsequently arrested, he concluded that he was just “the unlucky one who got caught.”

ADAS is here now
To be clear, I get it. Semiconductor, subsystem, and software suppliers are hungry to sell vehicle manufacturers more, and ever more expensive, building-block technologies and products. Vehicle manufacturers are hungry to sell consumers new, and ever more expensive, cars and trucks. And upstart vehicle manufacturers are hungry to use autonomy and other innovations to differentiate themselves from their older, more conservative competitors. But there’s plenty of differentiation (and sales) potential already available, and in much safer implementation forms, with so-called ADAS (advanced driver assistance systems) features: passive warning of, and active avoidance of, front and rear collisions, blind spot monitoring, lane departure warning, driver drowsiness detection, and the like. Like autonomy, ADAS capabilities provide clear value to drivers and passengers alike but, because they assist the driver rather than substituting for one, they discourage vehicle occupants from fully indulging their stupidity.
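To give a flavor of how simple one ADAS building block can be relative to full autonomy, here's a minimal, purely illustrative sketch of a forward-collision warning based on time-to-collision (TTC). The function name and the 2.5-second threshold are hypothetical assumptions for illustration, not taken from any production system, which would also fuse radar/camera data and handle acceleration, sensor noise, and much more.

```python
def forward_collision_warning(gap_m: float, closing_speed_mps: float,
                              ttc_threshold_s: float = 2.5) -> bool:
    """Return True when a warning should fire.

    gap_m: current distance to the lead vehicle, in meters.
    closing_speed_mps: rate at which the gap is shrinking, in m/s;
        zero or negative means the gap is stable or growing, so no warning.
    ttc_threshold_s: warn when projected time-to-collision drops below this.
    """
    if closing_speed_mps <= 0:
        return False
    ttc = gap_m / closing_speed_mps  # seconds until impact at current rates
    return ttc < ttc_threshold_s

# 30 m gap closing at 15 m/s gives a 2.0 s TTC, under the threshold:
print(forward_collision_warning(30.0, 15.0))   # True
# 60 m gap closing at 10 m/s gives a comfortable 6.0 s TTC:
print(forward_collision_warning(60.0, 10.0))   # False
```

The point of the sketch: the system never gets bored or looks away, it just evaluates the same check on every sensor update.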


Self-driving accidents are also overhyped
Much of what Elon Musk said to analysts in his early-May earnings call was, to put it charitably, unwise (not to mention Musk’s reported similarly recent decision to hang up on the chairman of the National Transportation Safety Board). But I completely agree with this part: “Broadly there’s over a 1 million, I think 1.2 million automotive deaths per year. And how many do you read about? Basically, none of them. However – but, if it’s an autonomous situation, it’s headline news … And this will be true, even if electric cars were – sorry, if autonomous cars were 10 times safer, so if instead of a 1 million deaths you had 100,000 deaths.”

Autonomy will (probably) save lives (eventually) because humans are imperfect
I grant Ars Technica’s point that, no matter how Tesla may want to repeatedly trumpet stats that the NHTSA (National Highway Traffic Safety Administration, the stats’ original source) is even backing away from, there isn’t yet a critical mass of data to definitively determine whether or not, and if so to what degree, autonomous vehicles (along with their more common ADAS-equipped precursors) prevent accidents and consequent injuries and deaths to occupants and pedestrians alike. These are early days, after all. But it still feels like the answers “yes” and “significantly” are foregone conclusions, doesn’t it?

Drivers get bored. And sleepy. And distracted. Sometimes they even intentionally distract themselves. All behaviors that autonomous systems don’t exhibit. Much (unfair, IMHO) editorial angst was heaped on the human operator acting as backup to the autonomous Uber “driver” after video footage revealed that he’d briefly looked down shortly before the collision with the pedestrian. With all due respect to the deceased and her family and friends … that’s what human beings do sometimes, even assuming the person in the car would have been able to see (more on that next) and react quickly enough if he’d been looking up at the time.

Here’s another example: security systems. Computer vision is in the process of rapidly displacing human-based systems, a process that began with military, airport, industrial, and other high-end installations and is now broadening to retail and consumer setups, because humans are unreliable. Expecting a security guard to constantly stare at a bank of monitors and discern an intruder with 100% reliability is delusional. And even after-the-fact sorting through archive footage to find that intruder, without autonomous assistance, is too time-consuming.
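As a toy illustration of why automated monitoring beats a tiring human at this job, here's a bare-bones frame-differencing motion detector. Real security systems use far more sophisticated computer vision; the function name, thresholds, and the flat grayscale-pixel-list "frames" are all simplifying assumptions for the sketch.

```python
def motion_detected(prev_frame, curr_frame,
                    pixel_delta=25, min_changed=4):
    """Flag motion when enough pixels change between consecutive frames.

    prev_frame, curr_frame: equal-length lists of grayscale values (0-255).
    pixel_delta: per-pixel brightness change needed to count as "changed".
    min_changed: how many changed pixels it takes to declare motion.
    """
    changed = sum(1 for a, b in zip(prev_frame, curr_frame)
                  if abs(a - b) > pixel_delta)
    return changed >= min_changed

static = [10] * 16                  # a 4x4 frame with nothing happening
intruder = [10] * 12 + [200] * 4    # a bright object enters the bottom row

print(motion_detected(static, static))    # False: no pixels changed
print(motion_detected(static, intruder))  # True: 4 pixels changed sharply
```

Unlike the security guard, this check runs identically on frame one and on frame ten million, which is exactly the reliability argument the paragraph above is making.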

[Continue reading on EDN US: Don’t judge vehicles using human limitations]

Brian Dipert is Editor-in-Chief of the Embedded Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
